
Mark_Friedenbach comments on AIFoom Debate - conclusion? - Less Wrong Discussion

11 Post author: Bound_up 04 March 2016 08:33PM



Comment author: turchin 05 March 2016 09:55:38AM *  3 points

I tried to explain in my recent post that at the current level of technology human-level AGI is possible, but foom is not yet, in particular because of problems with the size and speed of neural nets and the way they learn.

Also, human-level AGI is not powerful enough to foom. Human science is developing, but it includes millions of scientists; a fooming AI would need to be of the same complexity but run 1000 times quicker. We don't have such hardware. http://lesswrong.com/lw/n8z/ai_safety_in_the_age_of_neural_networks_and/
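The arithmetic behind this hardware claim can be made explicit. A minimal back-of-envelope sketch, using the comment's own round numbers (millions of scientists, a 1000x speedup) as assumptions rather than measurements:

```python
# Back-of-envelope for the hardware claim above. Both inputs are the
# comment's assumptions, not measured quantities.
scientists = 1_000_000  # "millions of scientists" doing human science
speedup = 1000          # a fooming AI "run 1000 times quicker"

# Human-brain-equivalents of compute a fooming AI would need on this view.
human_equivalents = scientists * speedup
print(human_equivalents)  # 1000000000
```

On these numbers, a fooming AI would need roughly a billion human-brain-equivalents of compute, which is the sense in which "we don't have such hardware."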

But the field of AI research is fooming, with a doubling time of 1 year now.
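For concreteness, here is what a steady 1-year doubling time compounds to; a toy sketch in which the "capability" metric and the assumption of steady doubling are illustrative, not data from the post:

```python
def capability(years: float, doubling_time: float = 1.0) -> float:
    """Capability relative to today, assuming steady exponential growth
    with a fixed doubling time (the comment's 1-year figure)."""
    return 2.0 ** (years / doubling_time)

# Ten doublings in ten years: a 1024x increase, large but arriving one
# gradual step at a time rather than as a discontinuous foom.
print(capability(10))  # 1024.0
```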

Comment author: [deleted] 06 March 2016 04:08:13PM 0 points

A doubling time of 1 year is not a FOOM. But thank you for taking the time to write up a post on AI safety that draws on modern AI research.

Comment author: turchin 06 March 2016 06:03:28PM 1 point

It is not foom, but in 10-20 years its results will be superintelligence. I am now writing a post that will give more details about how I see it - the main idea will be that AI speed improvement will follow a hyperbolic law, but it will evolve as a whole environment, not as a single fooming AI agent.
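The distinction between steady exponential growth and a hyperbolic law can be sketched numerically. This is a toy illustration with made-up constants; the 20-year horizon echoes the comment's "10-20 years" but is otherwise an assumption:

```python
def exponential(t: float, doubling_time: float = 1.0) -> float:
    """Fixed doubling time: the relative growth rate never changes."""
    return 2.0 ** (t / doubling_time)

def hyperbolic(t: float, t_sing: float = 20.0) -> float:
    """Hyperbolic law: the growth rate itself accelerates, so the
    quantity diverges in finite time at t = t_sing."""
    return t_sing / (t_sing - t)

# The exponential just keeps doubling on schedule; the hyperbolic curve
# stays comparatively flat and then blows up near the 20-year mark.
for t in (10.0, 15.0, 19.0, 19.9):
    print(t, exponential(t), hyperbolic(t))
```

The qualitative point is that a hyperbolic law reaches a finite-time singularity even though it can look slower than an exponential early on.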