David_Gerard comments on Heading off a near-term AGI arms race - Less Wrong

7 Post author: lincolnquirk 22 August 2012 02:23PM


Comments (70)


Comment author: Eudoxia 22 August 2012 03:06:03PM 9 points [-]

Narrow AI and machine learning?

Comment author: David_Gerard 22 August 2012 03:28:22PM 1 point [-]

Sounds about right. With the occasional driverless car, which is really pretty amazing.

Comment author: billswift 22 August 2012 04:02:20PM 1 point [-]

I think a working AGI is more likely to result from expanding or generalizing a working driverless car than from an academic program somewhere. A program to improve the "judgement" of a working narrow AI strikes me as a much more plausible route to AGI.

Comment author: Kaj_Sotala 23 August 2012 07:27:12AM *  3 points [-]

Our evolutionary history would seem to support this view - to a first approximation, it would seem to me like general intelligence effectively evolved by stacking one narrow-intelligence module on top of another.

Spiders are pretty narrow intelligence, rats considerably less so.

Comment author: JulianMorrison 24 August 2012 10:14:28PM 0 points [-]

And legoland is built of stacking bricks. But try deriving legoland by generalizing a 2x2 blue square.

Comment author: Eliezer_Yudkowsky 23 August 2012 09:39:54PM 1 point [-]

There are proverbs about how trying to generalize your code will never get to AGI. These proverbs are true, and they're still true when generalizing a driverless car. I might worry to some degree about free-form machine learning algorithms at hedge funds, but not about generalizing driverless cars.

Comment author: MugaSofer 17 September 2012 01:24:13PM *  1 point [-]

There go my wild theories about the Cars backstory.

Comment author: bogus 17 September 2012 02:58:00PM 1 point [-]

Fear not. There is actual research being done on making self-driving cars more anthropomorphic, in order to enable better communication with pedestrians.

Comment author: latanius 24 August 2012 02:00:12AM *  0 points [-]

Current narrow AIs are unlikely to generalize into AGI, but they contain parts that can be used to build one :)

Comment author: Douglas_Knight 22 August 2012 09:36:05PM 1 point [-]

Note that the driverless car itself came from "an academic program somewhere."

Comment author: jmmcd 22 August 2012 05:26:45PM 0 points [-]

Has LW, or some other forum, held any useful previous discussion on this topic?

Comment author: Manfred 22 August 2012 06:21:43PM 0 points [-]

Not that I know of, but I'm pretty sure billswift's position does not represent that of most LWers.

Comment author: Dolores1984 22 August 2012 08:18:07PM 2 points [-]

It certainly doesn't represent mine. The architectural shortcomings of narrow AI do not lend themselves to gradual improvement. At some point, you're hamstrung by your inability to solve certain crucial mathematical issues.

Comment author: billswift 23 August 2012 01:42:13PM *  1 point [-]

You add a parallel module to solve the new issue and a supervisory module to arbitrate between them. There are more elaborate systems that could likely work better for many particular situations, but even this simple system suggests there is little substance to your criticism. See Minsky's Society of Mind, or some papers on modularity in evolutionary psych, for more details.
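The parallel-module-plus-supervisor idea described above can be sketched in a few lines. This is a hypothetical illustration, not anything from Minsky; the `Module` and `Arbiter` names and the string-matching dispatch are invented for the example.

```python
# Sketch of the module-plus-supervisor architecture: each narrow capability
# is a parallel module, and a supervisory module arbitrates between them.
from typing import Callable, Optional


class Module:
    def __init__(self, name: str, can_handle: Callable[[str], bool],
                 solve: Callable[[str], str]):
        self.name = name
        self.can_handle = can_handle
        self.solve = solve


class Arbiter:
    """Supervisory module: routes each problem to the first competent module."""

    def __init__(self):
        self.modules: list[Module] = []

    def add_module(self, module: Module) -> None:
        # Adding a new capability = bolting on another parallel module.
        self.modules.append(module)

    def dispatch(self, problem: str) -> Optional[str]:
        for m in self.modules:
            if m.can_handle(problem):
                return m.solve(problem)
        return None  # no module claims the problem


arbiter = Arbiter()
arbiter.add_module(Module("driving", lambda p: p.startswith("drive"),
                          lambda p: "plan route"))
arbiter.add_module(Module("parking", lambda p: p.startswith("park"),
                          lambda p: "find spot"))
print(arbiter.dispatch("drive to work"))   # -> plan route
print(arbiter.dispatch("write a sonnet"))  # -> None
```

The limitation Dolores1984 raises below is visible even here: every new problem type needs its own hand-built module, and the arbiter only routes, it never generalizes.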

Comment author: Dolores1984 23 August 2012 03:30:04PM *  0 points [-]

Sure you can add more modules. Except that then you've got a car-driving module, and a walking module, and a stacking-small-objects module, and a guitar-playing module, and that's all fine until somebody needs to talk to it. Then you've got to write a Turing-complete conversation module, and (as it turns out) having a self-driving car really doesn't make that any easier.

Comment author: V_V 23 August 2012 11:04:49PM 3 points [-]

Do you realize that human intelligence evolved exactly that way? A self-swimming fish brain with lots of modules haphazardly attached.

Comment author: Dolores1984 23 August 2012 11:23:24PM 0 points [-]

Evolution and human engineers don't work in the same ways. It also took evolution three million years.

Comment author: jmmcd 23 August 2012 12:16:32PM 0 points [-]

I believe you, but intuitively the first objection that comes to my mind is that a car-driving AI doesn't have the same type of "agent-ness" and introspection that an AGI would surely need. I'd love to read more about it.

Comment author: atucker 23 August 2012 03:45:01AM -1 points [-]

Narrow-AI driverless cars will probably not decide that they need to take over the world in order to get to their destination in the most efficient way. Even if it would be better, I would be very surprised if they decided to model the world that generally for the purposes of driving.

There's only so much modeling of the world/general capability you need in order to solve very domain-specific problems.

Comment author: billswift 23 August 2012 01:32:12PM 0 points [-]

The reason for expanding a narrow AI is the same as the reason a tool agent won't stay restricted: the narrow domain it is designed to function in is embedded in the complexity of the real world. Eventually someone will realize that the agent/AI could provide better service if it understood more about how its job fits into the broader concerns of its passengers/users/customers, and decide to do something about it.

Comment author: atucker 23 August 2012 03:33:04PM *  -1 points [-]

AIXI achieves its generality by trying to model every possible program the universe could be running, and eventually finding the programs that fit.
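A toy version of that idea, for concreteness: enumerate candidate models, weight simpler ones more heavily, discard any model the observations falsify, and predict with what survives. The tiny hand-picked model set here is a stand-in for "every possible program" (which in the real formalism is uncomputable); the names and weights are made up for illustration.

```python
# Toy Solomonoff-induction-style predictor over a binary sequence.
models = {
    # name: (description length, next-bit predictor given history)
    "always0":   (1, lambda h: 0),
    "always1":   (1, lambda h: 1),
    "alternate": (2, lambda h: 1 - h[-1] if h else 0),
    "repeat2":   (3, lambda h: h[-2] if len(h) >= 2 else 0),
}


def predict(history):
    votes = {0: 0.0, 1: 0.0}
    for length, f in models.values():
        # Keep only models consistent with every past observation.
        if all(f(history[:i]) == history[i] for i in range(len(history))):
            votes[f(history)] += 2.0 ** -length  # simplicity prior
    return max(votes, key=votes.get)


print(predict([0, 1, 0, 1]))  # the surviving "alternate" model predicts 0
```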

Driverless cars may start modeling things other than driving, and may even start trying to predict where their users will be, but I suspect they would just track user habits or their smartphones rather than trying to figure out their owners' economic and psychological incentives for going to different places.
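The habit-tracking approach is about this simple: a frequency table keyed by time of day, with no model of the owner's incentives at all. The trip log below is invented for the example.

```python
# Predict the next destination purely from counted habits, keyed by hour.
from collections import Counter, defaultdict

trips = [  # (hour of departure, destination) -- hypothetical trip log
    (8, "office"), (8, "office"), (8, "gym"),
    (18, "home"), (18, "home"), (12, "cafe"),
]

habits = defaultdict(Counter)
for hour, dest in trips:
    habits[hour][dest] += 1


def likely_destination(hour):
    counts = habits.get(hour)
    return counts.most_common(1)[0][0] if counts else None


print(likely_destination(8))   # -> office
print(likely_destination(3))   # -> None (no habit data for 3am)
```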

Trying to build a car that's generally capable of driving and figuring out new things about driving might be dangerous, but there are plenty of useful features to give people before anyone gets there.

Just wondering, is your intuition coming from the tighter tie to reality that a driverless car would have?

Comment author: Kawoomba 23 August 2012 05:14:15PM -1 points [-]

"It was terrible, officer ... my mother, she was so happy with her new automatic car! It seemed to anticipate her every need! Even when she forgot where she wanted to go, in her old age, the car would remember and take her there ... she had been so lonely ever since da' passed. I can't even fathom how the car got into her bedroom, or what it was, oh god, what it was ... doing to her! The car, it still ... it didn't know she was already ... all that blood ..."