
gwern comments on A Parable of Elites and Takeoffs - Less Wrong Discussion

Post author: gwern | 30 June 2014 11:04PM | 23 points




Comment author: gwern | 02 July 2014 03:29:31AM | 0 points

> I don't believe AGI will be militarily useful, at least moreso than any other technology.

Other technologies have sparked arms races, so that seems like an odd position to take.

> Nor do I believe that AGI will be developed on a long enough time scale for an "arms race".

If you're a 'fast takeoff' proponent, I suppose the parallels to nukes aren't of much value, and you don't care whether politicians would handle a slow takeoff well or poorly. I don't find fast takeoffs all that plausible, so these are relevant matters to me and to many other people interested in AI safety.

Comment author: [deleted] | 06 July 2014 12:50:07AM | 0 points

Eh... timescales are relative here. Typically, when someone around here says "fast takeoff", I assume they mean something along the lines of That Alien Message -- a hard takeoff on the order of a literal blink of an eye, which is pure sci-fi bunk. But I find the other extreme, parroted by Luke Muehlhauser, Stuart Armstrong, and others -- 50 to 100 years -- equally bogus. From the weak inside view, my best predictions put the entire project on the order of 1-2 decades, with the critical "takeoff" period measured in months or a few years, depending on the underlying architecture. That's not what most people around here mean by a "fast takeoff", but it is still too fast for meaningful political reaction.