timtyler comments on Intelligence explosion in organizations, or why I'm not worried about the singularity - Less Wrong

Post author: sbenthall 27 December 2012 04:32AM




Comment author: timtyler 30 December 2012 02:54:21AM

We don't expect a sudden increase in the self-improvement abilities of machines either.

Maybe you don't expect that, but surely you must be aware that many of us do.

I am aware that there's an argument that at some point things will be changing rapidly:

I think that at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability - "AI go FOOM".

We are witness to Moore's law. A straightforward extrapolation of that says that at some point things will be changing rapidly. I don't have an argument with that. What I would object to are saltations. Those are suggested by the term "suddenly" - but they are contrary to evolutionary theory.
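The distinction here can be made concrete. A minimal sketch of the "straightforward extrapolation" (using illustrative numbers - a doubling period of roughly two years from a 1971 baseline of about 2,300 transistors; these figures are assumptions, not part of the original comment) shows that exponential growth gets very steep while each step remains a constant ratio - rapid change with no saltations:

```python
# Idealized Moore's-law extrapolation (illustrative numbers only).
# The curve becomes steep, yet the year-over-year ratio is constant:
# fast change, but no sudden jumps.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Transistor count under an idealized doubling every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1991, 2011, 2031):
    print(y, round(transistors(y)))

# The growth factor between any two consecutive years is the same
# constant (2 ** 0.5, about 1.41, for a two-year doubling period):
ratio = transistors(2000) / transistors(1999)
print(round(ratio, 3))
```

The constant consecutive-year ratio is the point of the sketch: extrapolated exponential progress is smooth at every scale, which is why it is compatible with "at some point things will be changing rapidly" but not with a discontinuous jump.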

Probably, things will be progressing fastest well after the human era is over. It's a remote era which we can really only speculate about. We have far more immediate issues to worry about than what is likely to happen then.

Every organization that's not a country is far enough away from that level of power that I don't expect them to become catastrophically dangerous any time soon without a sudden increase in self-improvement.

So: giant oaks from tiny acorns grow - and it is easiest to influence creatures when they are young.