Vladimir_Nesov comments on Q&A with Michael Littman on risks from AI - Less Wrong

Post author: XiXiDu 19 December 2011 09:51AM




Comment author: cousin_it 19 December 2011 12:43:00PM 20 points

I think this expert is anthropomorphizing too much. To pose an extinction risk, a machine doesn't even need to talk, much less replicate all the accidental complexity of human minds. It just has to be good at physics and engineering.

These tasks seem easier to formalize than many other things humans do: in particular, you could probably figure out the physics of our universe from very little observational data, given a simplicity prior and lots of computing power (or a good enough algorithm). Some engineering tasks are limited by computing power too, e.g. protein folding is an already formalized problem, and a machine that could solve it efficiently could develop nanotech faster than humans do.

We humans probably suck at physics and engineering on an absolute scale, just as we suck at multiplying 32-bit numbers; see Moravec's paradox. And we probably suck at these tasks about as much as it's possible to suck and still build a technological civilization, because otherwise we would have built it at an earlier point in our evolution.

We now know that playing chess doesn't require human-level intelligence as Littman understands it. It may turn out that destroying the world doesn't require human-level intelligence either. A narrow AI could do just fine.

Comment author: Vladimir_Nesov 19 December 2011 01:23:26PM 8 points

> We now know that playing chess doesn't require human-level intelligence as Littman understands it. It may turn out that destroying the world doesn't require human-level intelligence either. A narrow AI could do just fine.

Interesting: this framing moved me more than your previous explanation.