Manfred comments on Heading off a near-term AGI arms race - Less Wrong

7 Post author: lincolnquirk 22 August 2012 02:23PM




Comment author: Manfred 22 August 2012 06:21:43PM 0 points

Not that I know of, but I'm pretty sure billswift's position does not represent that of most LWers.

Comment author: Dolores1984 22 August 2012 08:18:07PM 2 points

It certainly doesn't represent mine. The architectural shortcomings of narrow AI do not lend themselves to gradual improvement. At some point, you're hamstrung by your inability to solve certain crucial mathematical issues.

Comment author: billswift 23 August 2012 01:42:13PM * 1 point

You add a parallel module to solve the new issue and a supervisory module to arbitrate between them. There are more elaborate systems that could likely work better for many particular situations, but even this simple system suggests there is little substance to your criticism. See Minsky's Society of Mind, or some papers on modularity in evolutionary psych, for more details.
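The parallel-module-plus-supervisor scheme described above can be sketched in a few lines. This is a hypothetical illustration, not anything from Minsky or the commenters: the `Module`, `Supervisor`, and the toy "arithmetic"/"reverse" modules are all invented names, and confidence-based arbitration is just one simple way a supervisory module might choose among competing modules.

```python
# Hypothetical sketch of the scheme billswift describes: each narrow
# module reports how confident it is that it can handle an input, and
# a supervisory module arbitrates by routing the task to the most
# confident module. All names here are illustrative.

class Module:
    def __init__(self, name, can_handle, solve):
        self.name = name
        self.can_handle = can_handle  # task -> confidence in [0, 1]
        self.solve = solve            # task -> answer

class Supervisor:
    def __init__(self):
        self.modules = []

    def add_module(self, module):
        # Adding a new capability = bolting on another parallel module.
        self.modules.append(module)

    def dispatch(self, task):
        # Arbitration: pick the module most confident it can handle the task.
        best = max(self.modules, key=lambda m: m.can_handle(task))
        return best.solve(task)

# Toy usage: two narrow modules under one supervisor.
sup = Supervisor()
sup.add_module(Module("arithmetic",
                      lambda t: 1.0 if t[0] == "add" else 0.0,
                      lambda t: t[1] + t[2]))
sup.add_module(Module("reverse",
                      lambda t: 1.0 if t[0] == "reverse" else 0.0,
                      lambda t: t[1][::-1]))

print(sup.dispatch(("add", 2, 3)))       # -> 5
print(sup.dispatch(("reverse", "abc")))  # -> cba
```

Dolores1984's objection below is, in these terms, that some capabilities (open-ended conversation, say) do not decompose into another `Module` you can simply append to the list.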

Comment author: Dolores1984 23 August 2012 03:30:04PM * 0 points

Sure, you can add more modules. Except that then you've got a car-driving module, and a walking module, and a stacking-small-objects module, and a guitar-playing module, and that's all fine until somebody needs to talk to it. Then you've got to write a Turing-test-passing conversation module, and (as it turns out) having a self-driving car really doesn't make that any easier.

Comment author: V_V 23 August 2012 11:04:49PM 3 points

Do you realize that human intelligence evolved in exactly that way? A self-swimming fish brain with lots of modules haphazardly attached.

Comment author: Dolores1984 23 August 2012 11:23:24PM 0 points

Evolution and human engineers don't work in the same ways. It also took evolution three million years.

Comment author: V_V 24 August 2012 12:07:11AM 4 points

True enough, but there is no evidence that general intelligence is anything more than a large collection of specialized modules.

Comment author: jmmcd 23 August 2012 12:16:32PM 0 points

I believe you, but intuitively the first objection that comes to my mind is that a car-driving AI doesn't have the same type of "agent-ness" and introspection that an AGI would surely need. I'd love to read more about it.