Mark_Friedenbach comments on [link] [poll] Future Progress in Artificial Intelligence - Less Wrong

8 Post author: Pablo_Stafforini 09 July 2014 01:51PM


Comments (89)


Comment author: Lumifer 10 July 2014 07:42:10PM *  2 points [-]

> My objection (and his?) is against the claim that an AI could replicate this capability in "moments," according to the "because superhuman!" line of reasoning. I find that bogus.

Let me suggest a way:

  1. Gain control of a single machine.
  2. Decompile the OS code.
  3. Run a security audit on the OS and find exploits.

Even easier if the OS is open source.
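Step 3 can be illustrated in miniature. The sketch below is my own toy example, not anything from the thread: it merely greps recovered source for C library calls that perform no bounds checking. Real audit tools use dataflow analysis and symbolic execution rather than regular expressions, but the shape of the task is the same — mechanically flag suspicious call sites for exploitation.

```python
import re

# Toy heuristic scanner: flags calls to C functions with no bounds
# checking. A real audit would use dataflow analysis, not regexes.
DANGEROUS = ("strcpy", "strcat", "sprintf", "gets")

def scan(source: str):
    """Return (line_number, function) pairs for risky call sites."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for fn in DANGEROUS:
            if re.search(rf"\b{fn}\s*\(", line):
                findings.append((lineno, fn))
    return findings

sample = """
int main(void) {
    char buf[16];
    gets(buf);            /* unbounded read */
    strcpy(buf, argv[1]); /* unbounded copy */
    return 0;
}
"""
print(scan(sample))  # → [(4, 'gets'), (5, 'strcpy')]
```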

Comment author: [deleted] 10 July 2014 08:01:26PM *  0 points [-]

That is a monumentally difficult undertaking, infeasible with current hardware limitations and certainly impossible on the "moments" timescale.

Comment author: gwern 10 July 2014 08:26:51PM 3 points [-]

> That is a monumentally difficult undertaking, infeasible with current hardware limitations

I think you underestimate the state of the art, such as the SAT/SMT-solver revolution in computer security. They automatically find exploits all the time, against OSes and libraries and APIs.
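To make the SAT/SMT point concrete, here is a toy stand-in of my own invention (the buffer size and check are made up): the solver's job is to find an input satisfying "the bounds check passes AND the write goes out of bounds." A real engine would solve such constraints symbolically over program paths; this sketch brute-forces a single off-by-one constraint to show what a satisfying witness looks like.

```python
# A stand-in for what an SMT solver does: search for an input that
# satisfies "the bounds check passes" AND "the write goes out of
# bounds". Real tools solve this symbolically over all program paths;
# here we brute-force one toy constraint.

BUF_SIZE = 16

def check_passes(n: int) -> bool:
    # Off-by-one bounds check: <= should have been <
    return n <= BUF_SIZE

def bytes_written(n: int) -> int:
    return n + 1  # payload plus NUL terminator

def find_exploit(limit: int = 1024):
    for n in range(limit):
        if check_passes(n) and bytes_written(n) > BUF_SIZE:
            return n  # witness input length triggering the overflow
    return None

print(find_exploit())  # → 16
```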

Comment author: [deleted] 11 July 2014 04:36:18AM *  -1 points [-]

> I think you underestimate the state of the art, such as the SAT/SMT-solver revolution in computer security. They automatically find exploits all the time, against OSes and libraries and APIs.

I think you miss my point. These SAT solvers are extremely expensive and don't scale well to large code bases. You can look to the literature to see the state of the art: using large clusters for long-running analysis on small code bases or isolated sections of a library. They do not, and cannot with available resources, scale up to large-scale analysis of an entire OS or network stack ... if they did, we humans would have done that already.

So to be clear, this UFAI breakout scenario assumes the AI already has access to massive amounts of computing hardware, which it can repurpose for long-duration HPC applications while escaping detection. And even if you find that realistic, I still wouldn't use the word "momentarily."

Comment author: jimrandomh 12 July 2014 05:35:51PM *  5 points [-]

> I think you miss my point. These SAT solvers are extremely expensive and don't scale well to large code bases. You can look to the literature to see the state of the art: using large clusters for long-running analysis on small code bases or isolated sections of a library. They do not, and cannot with available resources, scale up to large-scale analysis of an entire OS or network stack ... if they did, we humans would have done that already.

They have done that already. For example, this paper: "We implement our approach using a popular graph database and demonstrate its efficacy by identifying 18 previously unknown vulnerabilities in the source code of the Linux kernel."
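The approach in that paper — querying a graph representation of the code for vulnerable patterns — can be caricatured in a few lines. The sketch below is my own toy (the node names and graph are invented): it models dataflow as a graph and queries for paths where attacker-controlled input reaches a dangerous sink without passing through a sanitizer, which is the general shape of such graph-database queries.

```python
# Toy version of graph-based vulnerability discovery: model code as a
# graph of dataflow edges and query for paths where tainted input
# reaches a dangerous sink without passing a sanitizer. Real systems
# run such queries over an entire codebase in a graph database.

GRAPH = {
    "recv":     ["parse", "memcpy"],  # tainted input flows two ways
    "parse":    ["validate"],
    "validate": ["memcpy"],           # sanitized path
}
SOURCE, SINK, SANITIZER = "recv", "memcpy", "validate"

def unsanitized_paths(node, path=()):
    """Depth-first search for SOURCE→SINK paths missing the sanitizer."""
    path = path + (node,)
    if node == SINK:
        return [path] if SANITIZER not in path[:-1] else []
    found = []
    for nxt in GRAPH.get(node, ()):
        found.extend(unsanitized_paths(nxt, path))
    return found

print(unsanitized_paths(SOURCE))  # → [('recv', 'memcpy')]
```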

Comment author: gwern 11 July 2014 05:42:00PM 1 point [-]

> You can look to the literature to see the state of the art: using large clusters for long-running analysis on small code bases or isolated sections of a library.

Large clusters like... the ones that an AI would be running on?

> They do not, and cannot with available resources, scale up to large-scale analysis of an entire OS or network stack ... if they did, we humans would have done that already.

They don't have to scale, although that may be possible given increases in computing power (you only need to find an exploit somewhere, not all exploits everywhere), and I am skeptical that we humans would, in fact, 'have done that already'. That claim seems to prove way too much: are existing static code analysis tools applied everywhere? Are existing fuzzers applied everywhere?

Comment author: Lumifer 10 July 2014 09:05:33PM *  2 points [-]

> That is a monumentally difficult undertaking

Why in the world would a security audit of a bunch of code be "monumentally difficult" for an AI?

Comment author: [deleted] 11 July 2014 04:38:52AM 0 points [-]

It requires an infeasible amount of computation for us humans to do. Why do you suppose it would be different for an AI?

Comment author: Lumifer 11 July 2014 02:57:07PM 1 point [-]

> It requires an infeasible amount of computation for us humans to do.

Um. Humans -- in real life -- do run security audits of software. It's nothing rare or unusual. Frequently these audits are assisted by automated tools (e.g. checkers for buffer overruns). Again, this is happening right now, in real life, and there are no "infeasible amount of computation" problems.

Comment author: asr 11 July 2014 05:28:24AM *  1 point [-]

Doing an audit to catch all vulnerabilities is monstrously hard. But finding some vulnerabilities is a perfectly straightforward technical problem.

It happens routinely that people develop new and improved vulnerability detectors that can quickly find vulnerabilities in existing codebases. I would be unsurprised if better optimization engines in general lead to better vulnerability detectors.
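One concrete instance of a "straightforward" detector is a dumb random fuzzer. The sketch below is a toy of my own (the parser and its planted bug are invented): it throws random byte strings at a parser and records any crash that isn't a graceful rejection — exactly the kind of mechanical some-vulnerabilities search being described.

```python
import random

def parse_header(data: bytes) -> int:
    """Toy parser with a planted bug: the length field is trusted blindly."""
    if len(data) < 2:
        raise ValueError("too short")  # graceful rejection
    length = data[0]
    return data[1 + length]  # IndexError when length exceeds the payload

def fuzz(trials: int = 10_000, seed: int = 0):
    """Feed random inputs to the parser; collect inputs causing crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            parse_header(data)
        except ValueError:
            pass                  # expected, graceful rejection
        except IndexError:
            crashes.append(data)  # unexpected crash: a bug found
    return crashes

found = fuzz()
print(len(found) > 0)  # the planted bug is hit almost every trial
```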