Roko comments on Changing accepted public opinion and Skynet - Less Wrong


Comment author: derekz 22 May 2009 02:57:40PM 6 points

Steven, I'm a little surprised that the paper you reference convinces you of a high probability of imminent danger. I have read this paper several times, and would summarize its relevant points as follows:

  1. We tend to anthropomorphise, so our intuitive ideas about how an AI would behave might be biased. In particular, assuming that an AI will be "friendly" because people are more or less friendly might be wrong.

  2. Through self-improvement, AI might become intelligent enough to accomplish tasks much more quickly and effectively than we expect.

  3. This super-effective AI would have the ability (perhaps just as a side effect of its goal attainment) to wipe out humanity. Because of the bias in (1) we do not give sufficient credibility to this possibility when in fact it is the default scenario unless the AI is constructed very carefully to avoid it.

  4. It might be possible to do that careful construction (that is, create a Friendly AI), if we work hard on achieving that task. It is not impossible.

The only arguments for the likelihood of imminence, despite little to no apparent progress toward a machine capable of acting intelligently in the world and rapidly rewriting its own source code, are:

A. a "loosely analogous historical surprise" -- the above-mentioned nuclear reaction analogy. B. the observation that breakthroughs do not occur on predictable timeframes, so it could happen tomorrow. C. we might already have sufficient prerequisites for the breakthrough to occur (computing power, programming productivity, etc)

I find all of these points reasonable enough, and I imagine most people would agree. The problem is going from this set of "mights" and suggestive analogies to a probability of imminence. You can't expect to get much traction for something that merely might happen someday; you have to link from possibility to likelihood. That people make this leap without saying how they got there is why observers refer to the believers as a sort of religious cult. Perhaps the case is made somewhere, but I haven't seen it. I know that Yudkowsky and Hanson debated a closely related topic on Overcoming Bias at some length, but I found Eliezer's case completely unconvincing.

I just don't see it myself... "Seed AI" (as one example of a sort of scenario sketch) was written almost a decade ago and contains many different requirements. As far as I can see, none of them has seen any meaningful progress in the meantime. If multiple or many breakthroughs are necessary, let's see one of them for starters. One might hypothesize that just one magic-bullet breakthrough is necessary, but that sounds more like a paranoid fantasy than a credible scientific hypothesis.

Now, I'm personally sympathetic to these ideas (check the SIAI donor page if you need proof), and if the lack of a case from possibility to likelihood leaves me cold, it shouldn't be surprising that society as a whole remains unconvinced.

Comment deleted 23 May 2009 12:46:45PM
Comment author: steven0461 23 May 2009 12:54:50PM 1 point

Indeed:

In the CES model (which this author prefers), if the factor by which the doubling time (DT) falls at the next transition were the same as at one of the last three transitions, the next doubling time would be 1.3, 2.1, or 2.3 weeks. This suggests a remarkably precise estimate of an amazingly fast growth rate.
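
To make the arithmetic behind that claim concrete: it applies each of the last three observed doubling-time shrink factors to the current doubling time. Here is a minimal sketch in Python; the mode doubling times are rough placeholder values, not Hanson's calibrated figures, so the output only illustrates the method rather than reproducing the quoted 1.3, 2.1, and 2.3 weeks:

    # Sketch of the doubling-time extrapolation; the values below are
    # illustrative placeholders, NOT the paper's calibrated estimates.
    doubling_times_years = [34_000_000, 224_000, 909, 15]
    # e.g. brain growth, hunting, farming, industry eras

    # Factor by which the doubling time shrank at each historical transition.
    shrink_factors = [old / new for old, new in
                      zip(doubling_times_years, doubling_times_years[1:])]

    current_dt_years = doubling_times_years[-1]
    for factor in shrink_factors:
        next_dt_weeks = current_dt_years / factor * 52
        print(f"shrink factor {factor:,.0f}x -> "
              f"next doubling time ~ {next_dt_weeks:.1f} weeks")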

See also Economic Growth Given Machine Intelligence:

Let us now consider the simplest endogenous growth model ... lowering α̃ just a little, from .25 to .241, reduces the economic doubling time from 16 years to 13 months ... Reducing α̃ further to .24 eliminates diminishing returns and steady growth solutions entirely.
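
The sensitivity described here comes from the growth rate blowing up as α̃ approaches a critical value near .24. The toy calculation below is not the paper's actual model; it simply assumes a hyperbolic form g(α̃) = c / (α̃ − 0.24), with c calibrated so that α̃ = .25 gives a 16-year doubling time, to show how a tiny change in the exponent collapses the doubling time (reproducing the exact 13-month figure would require the paper's full model):

    import math

    # Toy illustration (NOT the paper's model): assume the steady-state
    # growth rate has a pole at ALPHA_CRIT, g(alpha) = c / (alpha - ALPHA_CRIT).
    ALPHA_CRIT = 0.24
    c = (math.log(2) / 16) * (0.25 - ALPHA_CRIT)  # calibrate: DT(0.25) = 16 years

    def doubling_time_years(alpha):
        """DT = ln(2) / g(alpha) under the assumed hyperbolic growth rate."""
        return math.log(2) * (alpha - ALPHA_CRIT) / c

    for alpha in (0.25, 0.245, 0.241):
        print(f"alpha = {alpha}: doubling time ~ "
              f"{doubling_time_years(alpha) * 12:.1f} months")
    # As alpha -> 0.24 the doubling time goes to zero: diminishing returns
    # (and any steady-growth solution) vanish, matching the quoted claim.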