geebee2 comments on Long-Term Technological Forecasting - Less Wrong

22 Post author: lukeprog 11 January 2012 04:13AM




Comment author: geebee2 16 January 2012 08:54:31PM  -1 points

"When will AGI be created?"

I'm not sure this means very much. How would we be able to tell?

Computers are already far superior to humans at many tasks. I expect more of the same in the future, with increasingly complex tasks being delegated to computers. However, I don't see that any "singularity" is likely — rather a relatively smooth progression from what is possible today toward more difficult problems that can be solved in the future.

Even supposing computers were to advance to a state of "intelligence" where they could, say, invent interesting new mathematics, I'm not sure this would have any more profound consequences than a chess-playing computer that can beat a human.

It's possible to imagine that a very powerful "intelligent" computer could somehow run amok, but we are so far from such a possibility that it hardly seems worth worrying about now. I'd worry more about human dangers (fascism, totalitarian regimes), since they seem to appear and become dangerous quite frequently. For example, should we be worried about China?