timtyler comments on Muehlhauser-Wang Dialogue - Less Wrong

Post author: lukeprog 22 April 2012 10:40PM


Comment author: timtyler 27 April 2012 10:43:30AM 2 points [-]

With a recursively self-improving AI, once you create something able to run, running a test can turn into deployment even without the programmer's intention.

So: damage to the rest of the world is what test harnesses are there to prevent. It makes sense that - if we can engineer advanced intelligences - we'll also be able to engineer methods of restraining them.

Comment author: Viliam_Bur 27 April 2012 12:04:58PM *  1 point [-]

That depends on how we engineer them. If we build an algorithm knowing what it does, then perhaps yes. If we try some black-box development such as "make this huge neural network, initialize it with random data, train it, make a few randomly modified copies, select the ones that learn fastest, etc." then I wouldn't be surprised if, after the first thousand failed approaches, the first one able to really learn and self-improve did something unexpected. The second approach seems more probable, because it's simpler to try.

Also after the thousand failed experiments I predict human error in safety procedures, simply because they will feel completely unnecessary. For example, a member of the team will turn off the firewalls and connect to Facebook (for greater irony, it could be LessWrong), providing the new AI with a simple escape route.

Comment author: timtyler 27 April 2012 09:56:52PM *  0 points [-]

Also after the thousand failed experiments I predict human error in safety procedures, simply because they will feel completely unnecessary.

We do have some escaped criminals today. It's not that we don't know how to confine them securely; it's more that we are not prepared to pay to do it. They do some damage, but it's tolerable. What the escaped criminals tend not to do is build huge successful empires and challenge large corporations or governments.

This isn't likely to change as the world automates. The exterior civilization is unlikely to face serious challenges from escaped criminals. Instead it is likely to start out - and remain - much stronger than they are.

Comment author: Viliam_Bur 01 May 2012 07:36:42PM 1 point [-]

We do have some escaped criminals today.

We don't have recursively self-improving, superhumanly intelligent criminals yet - only in comic books. Once we have a recursively self-improving superhuman AI, and it is not human-friendly, and it escapes... then we will have a comic-book situation in real life. Except we won't have a superhero on our side.

Comment author: timtyler 01 May 2012 11:24:07PM *  -1 points [-]

That's comic-book stuff. Society is self-improving faster than its components. Component self-improvement trajectories tend to be limited by the government breaking them up or fencing them in whenever they grow too powerful.

The "superintelligent criminal" scenario is broadly like worrying about "grey goo" - or about a computer virus taking over the world. It makes much more sense to fear humans with powerful tools that magnify their wills. Indeed, the "superintelligent criminal" scenario may well be a destructive meme - since it distracts people from dealing with that much more realistic possibility.

Comment author: Viliam_Bur 02 May 2012 07:57:33AM 1 point [-]

Component self-improvement trajectories tend to be limited by the government breaking them up or fencing them in whenever they grow too powerful.

Counterexample: any successful revolution. A subset of society became strong enough to overthrow the government, despite the government trying to stop them.

It makes much more sense to fear humans with powerful tools that magnify their wills.

Could a superhuman AI use human allies and give them this kind of tools?

Comment author: timtyler 02 May 2012 11:06:36AM *  1 point [-]

Component self-improvement trajectories tend to be limited by the government breaking them up or fencing them in whenever they grow too powerful.

Counterexample: any successful revolution. A subset of society became strong enough to overthrow the government, despite the government trying to stop them.

Sure, but look at the history of revolutions in large, powerful democracies. Of course, if North Korea develops machine intelligence, a revolution becomes more likely.

It makes much more sense to fear humans with powerful tools that magnify their wills.

Could a superhuman AI use human allies and give them this kind of tools?

That's pretty much what I meant: machine intelligence as a correctly functioning tool, rather than as an out-of-control system.

Comment author: Viliam_Bur 02 May 2012 01:06:03PM 1 point [-]

That's pretty much what I meant: machine intelligence as a correctly functioning tool, rather than as an out-of-control system.

It seems to me that you simply refuse to see an AI as an agent. If an AI and a human conquer the world, the only possible interpretation is that the human used the AI, never that the AI used the human. Even if it was all the AI's idea, that just means the human used the AI as an idea generator. Even if the AI kills the human afterwards, it would just mean that the human used the AI incorrectly and thus killed themselves.

Am I right about this?

Comment author: timtyler 03 May 2012 01:25:27AM *  0 points [-]

Er, no - I consider machines to be agents.