timtyler comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong

3 Post author: XiXiDu 14 November 2011 11:40AM


Comment author: timtyler 14 November 2011 01:57:11PM 4 points

Even if the AGI is not told to halt, e.g. is told to compute as many digits of Pi as possible, I consider it a far-fetched assumption that any AGI intrinsically cares to take over the universe as fast as possible in order to compute as many digits of Pi as possible. Sure, if all of that is presupposed then it will happen, but I don't see that most AGI designs are like that. Most designs that have the potential for superhuman intelligence, but are given simple goals, will in my opinion just bob up and down as slowly as possible.

It seems to be a kind-of irrelevant argument, since the stock market machines, query answering machines, etc. that humans actually build mostly try to perform their tasks as quickly as they can. There is not much idle thumb-twiddling in the real world of intelligent machines.

It doesn't much matter what machines that are not told to act quickly will do - we want machines to do things fast, and will build them that way.