shminux comments on Thinking soberly about the context and consequences of Friendly AI - Less Wrong

9 Post author: Mitchell_Porter 16 October 2012 04:33AM


Comment author: shminux 16 October 2012 04:50:00PM *  0 points [-]

A single AI will take over the world

I don't see how this can be avoided: if the damn thing is so much smarter, it can only treat "normal" humans as pets, ants, or, best case, wild animals confined to a sanctuary for their own good.

Comment author: timtyler 16 October 2012 10:37:20PM *  -1 points [-]

A single AI will take over the world [...]

I don't see how this can be avoided if the damn thing is so much smarter [...]

Companies and governments are much smarter than individual humans. So far, none has taken over the world. Companies compete with other companies; governments compete with other governments. Like that.

Comment author: shminux 16 October 2012 11:05:55PM 1 point [-]

Companies and governments are much smarter than humans.

Are they? More powerful, maybe. Often wealthier. But what evidence do you have that they are smarter? They often act rather stupidly.

Comment author: William_Quixote 18 October 2012 04:53:37AM 0 points [-]

Companies and governments are much smarter than humans. So far, none has taken over the world

Give the companies time. They're making good progress.

Comment author: timtyler 20 October 2012 11:50:25AM 0 points [-]

The governments too though. A company needs to overthrow all the governments to take over the world. Not an impossible task, perhaps, but it would be quite a revolution - and probably a bad one.

Comment author: fubarobfusco 16 October 2012 06:26:42PM 0 points [-]

"Pet", "vermin", and "wild animal" (as well as "livestock" and "working animal") are all concepts that humans have come up with for our species' relationships with other species that we've been living with since forever, and have developed both instincts and cultural practices to relate to. Why would you expect them to apply to an AI's relationship to humans? Isn't that a bit, well, anthropomorphizing?

Comment author: shminux 16 October 2012 06:45:03PM 3 points [-]

Isn't that a bit, well, anthropomorphizing?

Indeed it is, a bit. This is just an analogy meant to convey that humans aren't likely to stop a foomed AI (or maybe a group of them, if such a term even makes sense) from doing what it wants, just as animals are powerless to stop determined humans.