waveman comments on Open thread, Oct. 03 - Oct. 09, 2016 - Less Wrong

Post author: MrMind, 03 October 2016 06:59AM

Comment author: skeptical_lurker, 04 October 2016 05:23:48AM, 3 points

I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."

The problem, as various people have pointed out, is that this implies an intelligence capable of taking over the world, yet incapable of working out that when a human assigns a goal, they do not want it pursued in a way that destroys the world.

Worse, the argument can then be made that this idea of an AI interpreting goals so literally, without modelling a human mind, amounts to an "autistic AI", and that only autistic people would assume an AI would be similarly autistic. I do not endorse this argument in any way, but I guess it's still better to avoid arguments that signal low social skills, all other things being equal.

Is there any consensus on what the best 'elevator pitch' argument for AI risk is? Instead of focusing on any one failure mode, I would go with something like this:

"Most philosophers agree that there is no reason why superintelligence is not possible. Anything which is possible will eventually be achieved, and so will superintelligence, perhaps in the far future, perhaps in the next few decades. At some point, superintelligences will be as far above humans as we are above ants. I do not know what will happen at this point, but the only reference case we have is humans and ants, and if superintelligences decide that humans are an infestation, we will be exterminated."

Incidentally, this is the sort of thing I mean by painting LW-style ideas as autistic (via David Pearce):

As far as we can tell, digital computers are still zombies. Our machines are becoming autistically intelligent, but not supersentient - nor even conscious. [...] Full-Spectrum Superintelligence entails: [...] social intelligence [...] a metric to distinguish the important from the trivial [...] a capacity to navigate, reason logically about, and solve problems in multiple state-spaces of consciousness [e.g. dreaming states (cf. lucid dreaming), waking consciousness, echolocatory competence, visual discrimination, synaesthesia in all its existing and potential guises, humour, introspection, the different realms of psychedelia [...] and finally "Autistic", pattern-matching, rule-following, mathematico-linguistic intelligence, i.e. the standard, mind-blind cognitive tool-kit scored by existing IQ tests. High-functioning "autistic" intelligence is indispensable to higher mathematics, computer science and the natural sciences. High-functioning autistic intelligence is necessary - but not sufficient - for a civilisation capable of advanced technology that can cure ageing and disease, systematically phase out the biology of suffering, and take us to the stars. And for programming artificial intelligence.

Sometimes David Pearce seems very smart. And sometimes he seems to imply that the ability to think logically while on psychedelic drugs is as important as 'autistic intelligence'. I don't think he believes that autistic people are zombies lacking subjective experience, but that does seem to be implied.

Comment author: waveman, 07 October 2016 11:39:25AM, 0 points

One perhaps useful analogy for super-intelligence going wrong is corporations.

We create corporations to serve our ends. They can do things we cannot do as individuals. But in subtle and not-so-subtle ways, corporations can behave very destructively. One example is the way they pursue profit at the cost, in some cases, of ruining people's lives, damaging the environment, and corrupting the political process.

By analogy it seems plausible that super-intelligences may behave in a way that is against our interests.

It is not valid to assume that a super-intelligence will be smart enough to discern true human interests, or that it will be motivated to act on this knowledge.

Comment author: TheAncientGeek, 08 October 2016 05:01:29PM, 0 points

But are corporations existential threats?

Comment author: Lumifer, 07 October 2016 02:27:32PM, 2 points

Are you saying that no complex phenomenon is going to be able to provide nothing but benefits, or are you saying that corporations are, on balance, bad things and we would have been better off never having invented them?

Comment author: waveman, 07 October 2016 09:58:56PM, 0 points

Are you saying that no complex phenomenon is going to be able to provide nothing but benefits

No. Maybe it is possible. I am suggesting that it is not automatic that our creations serve our interests.

are you saying that corporations are, on balance, bad things and we would have been better off never having invented them?

No. Saying something has harmful effects is not the same as saying that it is overall bad.

I am illustrating ways in which our creations can fail to serve our interests:

  • They do not have to be omniscient to be smarter in some respects than individual humans.

  • It is hard to control their actions and to make sure they do serve our interests.

  • These effects can be subtle and difficult to understand.