Viliam_Bur comments on Another type of intelligence explosion - Less Wrong Discussion

16 Post author: Stuart_Armstrong 21 August 2014 02:49PM

Comment author: Viliam_Bur 25 August 2014 05:41:36PM 1 point

The question is how difficult it is to jump from the stupid AI to the general AI. Does it require a hundred gradual improvements? Or could just one right improvement in the right situation jump across the whole abyss? Something like taking the "idiot savant golem with severe autism" who cares only about one specific goal, and replacing that goal with "understand everything, and apply this understanding to improving your own functionality"... and suddenly we have a fully general AI.

Remember that compartmentalization exists in human minds, but the world is governed by universal laws. In some sense, "understanding particles" is all you need, plus some techniques to overcome the computational costs, such as creating and using higher-level models. -- With the higher-level models, compartmentalization can return, but maybe it would be different for a mind that could not only work within these models but also create and modify them as necessary, as opposed to the human mind, which has some of those levels hardwired, while the other ones always feel a bit "strange".

Being good at translating a thousand languages is not scary. Being good at modelling a thousand situations probably is.