2 Answers

habryka

Technology arms race dynamics

If we have an AI arms race, then the leading company's ability to keep secrets is highly relevant to the maximum lead it can maintain over competitors, which in turn determines the amount of resources it can invest into safety applications.

Difficulty of selectively releasing safety research from an institution

Concrete scenario: Imagine that you are FHI and some government official approaches you and says:

"Hey, we are really interested in this AI Safety thing, and are considering investing multiple millions of dollars into establishing a research center on it. We know that this might produce ideas with concrete capabilities applications, but we are planning to keep the results confidential".

Should you respond with:

"that seems good, since you can just decide not to release the things if they turn out to have too many capabilities applications?"

or

"no, you will likely be worse than the AI industry at keeping dangerous information secret, so this seems like a bad idea".

Biosecurity and potentially dangerous applications

In biosecurity, you have many applications of potentially dangerous technology, often for the development of better medicine. You often have both industry and government research labs doing research into those technologies. Which ones should you differentially encourage, given that a lot of the risk comes from leaking potentially highly dangerous technologies?

habryka

(This comment was originally made by Ruby on a private instance of LessWrong)

Habryka and I have been working on this question and the parent question "Has government or industry had greater past success in maintaining really powerful technological secrets?" After a half-hour conversation, this is the state of my thoughts.

Some questions feel intrinsically interesting to us, but often there's a more practical motive: a decision to be made. Here the real question is: given the opportunity, should one prefer that the development of powerful (and dangerous) technologies take place in industry or in government? We might expect think tanks and research organizations to be in a position to influence such choices.

Reasons keeping secrets might be important

The track record of keeping secrets is relevant to our preference between government and industry under the assumption that keeping powerful technologies secret is important. I am not certain of this a priori, but let's list cases where keeping technologies secret might matter:

  • Straightforwardly, there are actors who would use the technologies to cause harm, e.g. people who would create weaponized virus strains given the chance.
  • You are in an AI arms race dynamic, and your ability to maintain a lead over your competitors affords you the breathing room to do safety work. Suppose you have a 12-month lead: if you can keep your progress secret, then you have margin with which to do safety work while still staying ahead. If you can't reliably keep your advances secret (and the secrets are important), you can't make use of your lead for safety work. (See the sketch after this list.)
  • If your AI Safety Research work involves a mix of capabilities work (which you want to keep secret) and safety work (which you want to keep public), then your ability to conduct positive-sum safety research is going to depend on your ability to keep secrets.
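
A toy way to make the lead-vs-secrecy point quantitative (my own sketch, not part of the original discussion; the function name and the lead/leak-delay numbers are made up for illustration): the margin you can spend on safety work is capped both by your true lead and by how long your advances stay secret.

```python
def usable_safety_margin(lead_months: float, leak_delay_months: float) -> float:
    """Months the leader can spend purely on safety while staying ahead.

    lead_months:       the leader's true technical lead over its closest rival.
    leak_delay_months: how long an advance stays secret before the rival
                       learns it (float("inf") = perfect secrecy).
    """
    # With leaks, the rival is never more than leak_delay_months behind the
    # leaked frontier, so the leak delay caps the usable margin.
    return min(lead_months, leak_delay_months)

# A 12-month lead with perfect secrecy leaves all 12 months for safety work;
# if every advance leaks within 3 months, only 3 months of margin remain.
print(usable_safety_margin(12, float("inf")))  # 12
print(usable_safety_margin(12, 3))             # 3
```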

Which kinds of secrets might matter, and how often do people have them?

Having talked with Habryka and thought about the question, it's unclear to me how prevalent sensitive, powerful technological secrets are. "Nuclear weapons" is the example that comes easily to mind, but others aren't easily forthcoming, especially for industry.

In both government and industry it seems common to have strategic secrets: "this is where we've put the missiles," "this is the strategic direction of our company for next quarter." If a rival company knew what features you were going to build, they might just replicate them, but that probably isn't a question of technical know-how.

Relatedly, it would seem that the spread of a lot of technologies is limited less by "secrets" and more by operational difficulties, resources, and lack of practical expertise. As Habryka would say, the physics behind nuclear weapons is easily knowable, but it is far easier for Israel to build nuclear weapons than for North Korea, due to available expertise.

The development of AI might resemble this, where it comes down to a question of conceptual "software" breakthroughs vs. access to large amounts of compute that you can use well. This leads into the "does AI progress come more from software or hardware?" question.

Industry might not rely on secrets, for the above reasons and others. Microsoft and Oracle don't need to worry about someone breaking in and stealing their codebases, because having their codebase doesn't get you that far: you could already build your own clone of their tech, and you'll only win in the market if you're doing something else better, or are cheaper, or similar.

That said, Uber paid a $245M settlement for stealing driverless car tech from Google. A large sum, but not necessarily more than the tech was worth. This does seem like a case of technological secrets someone tried to protect (and yet failed). This high-profile case might imply the existence of other industry secrets. Though it might be unfair to use this to say industry is poor at keeping secrets relative to government when the secrets are much lower stakes, e.g. some driverless-car knowledge vs. how to make nuclear weapons.

Overall, right now, I feel uncertain and slightly confused. I do imagine that there are a lot of "small" secrets that companies try to keep from each other which confer advantage, but nothing world-changing. It's definitely the case that Apple keeps its product plans under wraps, and it would face a very real threat if someone were to emulate its products before launch. Probably not a lethal threat, though, and probably not something that changes the course of a war.

Related to this discussion is the kind of "espionage optimization pressure" one is under. There's a huge economic incentive to uncover Apple's plans, and I could imagine this pressure is greater than what applies when one country is trying to steal another's technology. Apple has to be less vulnerable than US military labs, simply because far more people are trying to infiltrate Apple. Relatedly, Habryka's model is that if ten people are trying to hack your company full-time, then you can't win. I don't have a coherent summary of this point, just that the degree of prowess required to keep a secret is going to depend heavily on how many people are trying to get that secret from you and how hard they're trying. Habryka mentioned that the Manhattan Project and others were full of spies despite efforts to prevent it (and again, the spread of the technology was possibly limited by operational/practical ability as much as by conceptual knowledge).

Comments by Ryan Carey:

"That said, Uber paid a $245M settlement for stealing driverless car tech from Google. A large sum but not necessarily more than the tech was worth. This does seem like technological secrets someone is trying to protect (and yet failed)."
If they lose a $245M secret and get compensated $245M, then they have actually not lost anything. Industry only needs to try to protect secrets in cases where they won't get compensated, such as from Chinese rival companies. Although in practice, if they can lose their tech to a domestic …
2 comments
Elo

"private instance of LessWrong"

I'm sorry, what? Please explain.

(Moved your comment to the top-level)

We set up a separate server with the LessWrong code and used it to test the related-questions features that you now see, since related questions are the kind of thing you can't really try out on the live server, and the whole feature went through multiple iterations and schema changes while we were trying it out. We do this all the time to try out various features before we push them live, or before we decide to scrap them.