Johnicholas comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

Post author: lukeprog, 29 January 2011 02:52AM


Comment author: Johnicholas, 31 January 2011 12:38:40PM, 3 points

I think this analysis assumes, or at least emphasizes, a false distinction between humans and "AI". For example, Searle's Chinese Room is an artificial intelligence built partly out of a human. It is easy to imagine intelligences built strictly out of humans, without the paperwork. When humans behave like humans, we naturally form supervening entities (groups, tribes, memes).

I tried to rephrase Chalmers' four-point argument without making a distinction between humans acting "naturally" (whatever that means) and "artificial intelligences":

  1. There is some degree of human intelligence and capability. In particular, human intelligence and capability have always involved manipulating the world indirectly (mediated by other humans or by nonhuman tools). "There is I."

  2. Since intelligence and capability are helpful in modifying ourselves and our tools, as we apply our intelligence and capability to ourselves and our tools, we will grow in intelligence and capability. "If there is I, there will be I+."

  3. If this self-application continues for many cycles, we will become very smart and capable. "If there is I+, there will be I++."

  4. Therefore, we will become very smart and very capable. "There will be I++."

I'm not trying to dismiss the dangers involved in this process; all I'm saying is that the language used feeds a Skynet-style "us versus them" mentality that isn't helpful. Admitting that "we have met the enemy and he is us" focuses attention where it ought to be.

A lot of AI-risk dialogue is a blend of: foolish people focusing on Skynet scenarios, foolish rhetoric that (whatever the author actually thinks) alludes to Skynet scenarios, and straightforward, sensible policies that could and should be separated from the bad science fiction.

This is what I mean by straightforward, sensible, non-sf policies: We have always made mistakes when using tools. Software tools allow us to make more mistakes faster, especially "unintended consequences" mistakes. We should put effort into developing safety techniques that guard against the unintended consequences of our software tools.
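The "safety techniques" point is concrete enough to sketch. One minimal, hypothetical example of guarding a tool against unintended consequences is a default-on dry-run mode: a destructive operation must first report what it would do, and the operator must explicitly opt in before anything irreversible happens. The function and names below are purely illustrative, not from the comment:

```python
import os
import tempfile

def delete_matching(paths, predicate, dry_run=True):
    """Delete files whose path satisfies `predicate`.

    Defaults to dry_run=True: nothing is removed until the caller
    has reviewed the preview and explicitly opts in.
    """
    doomed = [p for p in paths if predicate(p)]
    if dry_run:
        return doomed  # report only; the filesystem is untouched
    for p in doomed:
        os.remove(p)
    return doomed

# Demonstration in a throwaway directory.
tmp = tempfile.mkdtemp()
files = []
for name in ("keep.txt", "old.log"):
    p = os.path.join(tmp, name)
    open(p, "w").close()
    files.append(p)

# Dry run: the .log file is listed, but still exists on disk.
preview = delete_matching(files, lambda p: p.endswith(".log"))

# Only after reviewing the preview do we perform the real deletion.
delete_matching(files, lambda p: p.endswith(".log"), dry_run=False)
```

The design choice is the point: the safe behavior is the default, and the unsafe behavior requires a deliberate extra step, which is exactly the kind of unglamorous, non-sf safeguard the comment is arguing for.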

Comment author: shokwave, 31 January 2011 12:56:08PM, 2 points

Sci-fi policies can't be good policies?

Comment author: Leonhart, 31 January 2011 01:24:07PM, 2 points

What mentality other than "us versus them" would be even remotely helpful for dealing with a UFAI?

We have met the enemy and we are paperclips.

Comment author: shokwave, 31 January 2011 02:19:25PM, 1 point

"Us versus them" presupposes the existence of a "them", i.e., a UFAI, which means we have probably already lost. So really, no mentality would be remotely helpful for dealing with an existing UFAI.