army1987 comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky 11 May 2012 04:31AM (256 points)


Comments (1270)


Comment author: kalla724 17 May 2012 07:00:57PM 3 points

I'm not so much interested in the exact mechanism by which humans would be convinced to go to war as in even an approximate mechanism by which an AI would become good at convincing humans to do anything.

The ability to communicate a desire and convince people to take a particular course of action is not something that automatically "falls out" of an intelligent system. You need a theory of mind, an understanding of what to say, when to say it, and how to present information. There are hundreds of kids on the autistic spectrum who could trounce both of us in math, but who are completely unable to communicate an idea.

For an AI to develop these skills, it would somehow have to have access to information on how to communicate with humans; it would have to develop the concept of deception and a theory of mind; and it would have to establish methods of communication that would allow it to trick people into launching nukes. Furthermore, it would have to do all of this without trial communications and experimentation that would give away its goal.

Maybe I'm missing something, but I don't see a straightforward way something like that could happen. And I would like to see even an outline of a mechanism for such an event.

Comment author: [deleted] 17 May 2012 07:40:58PM 3 points

> For an AI to develop these skills, it would somehow have to have access to information on how to communicate with humans; it would have to develop the concept of deception and a theory of mind; and it would have to establish methods of communication that would allow it to trick people into launching nukes. Furthermore, it would have to do all of this without trial communications and experimentation that would give away its goal.

I suspect the Internet contains more than enough info for a superhuman AI to develop a working knowledge of human psychology.

Comment author: kalla724 17 May 2012 08:09:30PM 2 points

Only if it has the skills required to analyze and contextualize human interactions. Otherwise, the Internet is a whole lot of gibberish.

Again, these skills do not automatically fall out of any intelligent system.

Comment author: XiXiDu 18 May 2012 09:14:41AM 0 points

> I suspect the Internet contains more than enough info for a superhuman AI to develop a working knowledge of human psychology.

I don't see what justifies that suspicion.

Just imagine you emulated a grown-up human mind and it wanted to become a pickup artist: how would it do that with only an Internet connection? It would need some sort of avatar, at least, and it would then have to wait for the environment to provide a lot of feedback.

Therefore, even for the emulation of a grown-up mind, some capabilities will be really hard to acquire. How, then, is the emulation of a human toddler going to acquire those skills? Even worse, how is some sort of abstract AGI, which lacks all of the hard-coded capabilities of a human toddler, going to do it?

Can we even attempt to imagine what it is about a boxed emulation of a human toddler that makes it unable to become a master of social engineering in a very short time?

Comment author: NancyLebovitz 18 May 2012 12:47:15PM 2 points

Humans learn most of what they know about interacting with other humans by actual practice. A superhuman AI might be considerably better than humans at learning by observation.

Comment author: [deleted] 18 May 2012 05:39:42PM 1 point

> Just imagine you emulated a grown-up human mind

By “superhuman AI” I meant a very superhuman AI; the same does not apply to a slightly superhuman AI. (OTOH, if Eliezer is right, then the difference between a slightly superhuman AI and a very superhuman one is irrelevant, because as soon as a machine is smarter than its designer, it will be able to design a machine smarter than itself, and its child an even smarter one, and so on until the physical limits set in.)

> all of the hard-coded capabilities of a human toddler

The hard-coded capabilities are likely overrated, at least in language acquisition. (As someone put it, the Kolmogorov complexity of the innate parts of a human mind cannot possibly be more than that of the human genome; hence, if human minds are more complex than that, the extra complexity must come from their inputs.)
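The arithmetic behind that genome bound is easy to check. Here is a minimal back-of-the-envelope sketch; the genome size is a rough public estimate, not a measurement:

```python
# Back-of-the-envelope bound on the "innate" information in a human mind,
# following the Kolmogorov-complexity argument above.

BASE_PAIRS = 3.1e9    # approximate length of the human genome
BITS_PER_BASE = 2     # 4 possible nucleotides -> log2(4) = 2 bits each

raw_bits = BASE_PAIRS * BITS_PER_BASE
raw_megabytes = raw_bits / 8 / 1e6

print(f"Upper bound on genome information: {raw_megabytes:.0f} MB")
# Anything a mind "knows" beyond roughly this bound must have come
# from its inputs, not from hard coding.
```

(The true bound is even lower, since the genome is highly compressible and most of it is not mind-specific.)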

Also, statistical machine translation is astonishing -- by now, Google Translate translations from English to one of the other UN official languages and vice versa are better than a non-negligible fraction of translations by humans. (If someone had shown such a translation to me 10 years ago and told me “that's how machines will translate in 10 years”, I would have thought they were kidding me.)
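For readers unfamiliar with the approach: classic statistical machine translation rests on the noisy-channel model, choosing the target sentence e that maximizes P(e) · P(f | e) for a foreign sentence f. A toy sketch with made-up probabilities (not real model output):

```python
# Noisy-channel selection: language model P(e) rewards fluent English,
# translation model P(f|e) rewards faithfulness to the foreign sentence f.
candidates = {
    # english candidate: (P(e), P(f|e)) -- illustrative numbers only
    "the cat sleeps": (0.020, 0.30),
    "the cat sleep":  (0.001, 0.35),  # slightly more faithful, far less fluent
    "cat the sleeps": (0.0001, 0.30),
}

best = max(candidates, key=lambda e: candidates[e][0] * candidates[e][1])
print(best)  # -> the cat sleeps
```

The point of the factorization is that fluency and faithfulness are learned from separate data: P(e) from monolingual text (abundant on the Internet), P(f|e) from parallel corpora.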