Comment author: skeptical_lurker 04 October 2016 05:23:48AM *  3 points [-]

I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."

The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human asks it to pursue a certain goal, they would not want this goal pursued in a way that leads to the destruction of the world.

Worse, the argument can then be made that this idea that an AI will interpret goals so literally without modelling a human mind constitutes an "autistic AI", and that only autistic people would assume that AI would be similarly autistic. I do not endorse this argument in any way, but I guess it's still better to avoid arguments that signal low social skills, all other things being equal.

Is there any consensus on what the best 'elevator pitch' argument for AI risk is? Instead of focusing on any one failure mode, I would go with something like this:

"Most philosophers agree that there is no reason why superintelligence is not possible. Anything which is possible will eventually be achieved, and so will superintelligence, perhaps in the far future, perhaps in the next few decades. At some point, superintelligences will be as far above humans as we are above ants. I do not know what will happen at this point, but the only reference case we have is humans and ants, and if superintelligences decide that humans are an infestation, we will be exterminated."

Incidentally, this is the sort of thing I mean by painting LW style ideas as autistic (via David Pierce)

As far as we can tell, digital computers are still zombies. Our machines are becoming autistically intelligent, but not supersentient - nor even conscious. [...] Full-Spectrum Superintelligence entails: [...] social intelligence [...] a metric to distinguish the important from the trivial [...] a capacity to navigate, reason logically about, and solve problems in multiple state-spaces of consciousness [e.g. dreaming states (cf. lucid dreaming), waking consciousness, echolocatory competence, visual discrimination, synaesthesia in all its existing and potential guises, humour, introspection, the different realms of psychedelia [...] and finally "Autistic", pattern-matching, rule-following, mathematico-linguistic intelligence, i.e. the standard, mind-blind cognitive tool-kit scored by existing IQ tests. High-functioning "autistic" intelligence is indispensable to higher mathematics, computer science and the natural sciences. High-functioning autistic intelligence is necessary - but not sufficient - for a civilisation capable of advanced technology that can cure ageing and disease, systematically phase out the biology of suffering, and take us to the stars. And for programming artificial intelligence.

Sometimes David Pierce seems very smart. And sometimes he seems to imply that the ability to think logically while on psychedelic drugs is as important as 'autistic intelligence'. I don't think he believes that autistic people are zombies lacking subjective experience, but that does also seem implied.

Comment author: SoerenE 04 October 2016 01:27:42PM 2 points [-]

No, a Superintelligence is by definition capable of working out what a human wishes.

However, a Superintelligence designed to e.g. calculate digits of pi would not care about what a human wishes. It simply cares about calculating digits of pi.

In response to May Outreach Thread
Comment author: SoerenE 08 May 2016 06:41:33PM 3 points [-]

In a couple of days, we are hosting a seminar in Århus (Denmark) on AI Risk.

Comment author: Huluk 26 March 2016 12:55:37AM *  26 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: SoerenE 29 March 2016 05:54:11PM 35 points [-]

I have taken the survey.

Comment author: Alicorn 28 March 2016 04:14:59AM 26 points [-]

I am literally pregnant right now and wasn't sure how to answer the ones about how many children I have or if I plan more. (I went with "one" and "uncertain" but could have justified "zero" and "yes").

Comment author: SoerenE 29 March 2016 05:49:07PM 8 points [-]

Congratulations!

My wife is also pregnant right now, and I strongly felt that I should include my unborn child in the count.

Comment author: Lumifer 04 March 2016 04:53:55PM 1 point [-]

I am not sure the OP had much meaning behind his "vaguely magical" expression, but given that we are discussing it anyway :-) I would probably reinterpret it in terms of Knightian uncertainty. It's not only the case that we don't know, we don't know what we don't know and how much we don't know.

Comment author: SoerenE 04 March 2016 07:30:40PM 0 points [-]

This interpretation makes a lot of sense. The term can describe events that have a lot of Knightian Uncertainty, which a "Black Swan" like UFAI certainly has.

Comment author: Lumifer 03 March 2016 08:12:36PM *  0 points [-]

We have no clear idea of how humanity will travel to the stars, but the subject is neither "vaguely magical", nor is it true that the sentence "humans will visit the stars" does not refer to anything.

We have no clear idea if or how humanity will travel to the stars. I feel that discussions of things like interstellar starship engines at the moment are "vaguely magical" since no known technology suffices and it's not a "merely engineering" problem. Do you think it's useful to work on safety of interstellar engines? They could blow up and destroy a whole potential colony...

Comment author: SoerenE 04 March 2016 07:45:02AM 1 point [-]

You bring up a good point, whether it is useful to worry about UFAI.

To recap, my original query was about the claim that p(UFAI before 2116) is less than 1% due to UFAI being "vaguely magical". I am interested in figuring out what that means - is it a fair representation of the concept to say that p(Interstellar before 2116) is less than 1% because interstellar travel is "vaguely magical"?

What would be the relationship between "Requiring Advanced Technology" and "Vaguely Magical"? Clarke's third law is a straightforward link, but "vaguely magical" has previously been used to indicate poor definitions, poor abstractions and sentences that do not refer to anything.

Comment author: Lumifer 22 February 2016 03:54:05PM 1 point [-]

It's "vaguely magical" in sense that there is a large gap between what we have now and (U)FAI. We have no clear idea of how that gap could be crossed, we just wave hands and say "and then magic happens and we arrive at our destination".

Comment author: SoerenE 03 March 2016 07:52:38PM 0 points [-]

Many things are far beyond our current abilities, such as interstellar space travel. We have no clear idea of how humanity will travel to the stars, but the subject is neither "vaguely magical", nor is it true that the sentence "humans will visit the stars" does not refer to anything.

I feel that it is an unfair characterization of the people who investigate AI risk to say that they claim it will happen by magic, and that they stop the investigation there. You could argue that their investigation is poor, but it is clear that they have worked a lot to investigate the processes that could lead to Unfriendly AI.

Comment author: Lumifer 19 February 2016 08:28:56PM 2 points [-]

Could we stretch the analogy to claim 3, and call some increases in human numbers "super"?

I don't know -- it all depends on what you consider "super" :-) Populations of certain organisms oscillate with much greater magnitude than humans -- see e.g. algae blooms.

Comment author: SoerenE 20 February 2016 03:20:22PM 0 points [-]

Like Unfriendly AI, algae blooms are events that behave very differently from events we normally encounter.

I fear that the analogies have lost a crucial element. OrphanWilde considered Unfriendly AI "vaguely magical" in the post here. The algae bloom analogy also has very vague definitions, but the changes in population size of an algae bloom are a matter I would call "strongly non-magical".

I realize that you introduced the analogies to help make my argument precise.

Comment author: _rpd 19 February 2016 06:59:08PM *  1 point [-]

Naively, the required condition is v + dH > c, where v is the velocity of the spaceship, d is the distance from the threat and H is Hubble's constant.

However, when discussing distances on the order of billions of light years and velocities near the speed of light, the complications are many, not to mention an area of current research. For a more sophisticated treatment see user Pulsar's answer to this question ...

http://physics.stackexchange.com/questions/60519/can-space-expand-with-unlimited-speed/

... in particular the graph Pulsar made for the answer ...

http://i.stack.imgur.com/Uzjtg.png

... and/or the Davis and Lineweaver paper [PDF] referenced in the answer.
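The naive condition above can be sketched numerically. This is only a toy check of the v + dH > c inequality as stated, ignoring the relativistic complications the answer points to; the value of H and the example figures are illustrative assumptions, not from the thread.

```python
C = 299_792.458   # speed of light, km/s
H = 70.0          # Hubble constant, ~70 km/s per megaparsec (approximate)

def naive_unreachable(v_kms: float, d_mpc: float) -> bool:
    """Toy version of v + d*H > c: the ship's own velocity plus the
    Hubble recession at distance d exceeds the speed of light."""
    return v_kms + d_mpc * H > C

# A ship at 0.9c, 1000 Mpc away: recession alone adds 70,000 km/s.
print(naive_unreachable(0.9 * C, 1000))   # True: light from here cannot catch it
# A slow probe 10 Mpc away is comfortably reachable.
print(naive_unreachable(10.0, 10))        # False
```

As Pulsar's answer shows, the real picture is subtler, which is why light from the ship may still reach us even when ours cannot reach it.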

Comment author: SoerenE 19 February 2016 08:16:47PM 0 points [-]

Wow. It looks like light from James' spaceship can indeed reach us, even if light from us cannot reach the spaceship.

Comment author: Lumifer 19 February 2016 03:47:57PM 3 points [-]

Ah, so you meant the accent in 3. to be on "reaches", not on "super"?

The analogy looks like this: 1. Humans multiply, they self-improve their numbers; 2. The reproduction is recursive -- the larger a generation is, the yet larger will the next one be. Absent constraints, the growth of a population is exponential.
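The recursive step in point 2 can be sketched in a few lines; this is just an illustration of "absent constraints, growth is exponential", with made-up numbers.

```python
def project(population: float, factor: float, generations: int) -> float:
    """Each generation is the previous one multiplied by a fixed factor,
    so population grows exponentially whenever the factor exceeds 1."""
    for _ in range(generations):
        population *= factor
    return population

print(project(2, 2.0, 10))   # 2048.0 -- doubling for ten generations
```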

Comment author: SoerenE 19 February 2016 08:00:50PM *  0 points [-]

English is not my first language. I think I would put the accent on "reaches", but I am unsure what would be implied by having the accent on "super". I apologize for my failure to write clearly.

I now see the analogy with human reproduction. Could we stretch the analogy to claim 3, and call some increases in human numbers "super"?

The lowest estimate of the historical number of humans I have seen is from https://en.wikipedia.org/wiki/Population_bottleneck , claiming as few as 2,000 humans for 100,000 years. Human numbers will probably reach a (mostly cultural) limit of 10,000,000,000. I feel that this development in human numbers deserves to be called "super".

The analogy could perhaps even be stretched to claim 4 - some places at some times could be characterized by "runaway population growth".
