
Wes_W comments on Top 9+2 myths about AI risk - Less Wrong Discussion

44 Post author: Stuart_Armstrong 29 June 2015 08:41PM



Comment author: [deleted] 30 June 2015 02:26:39PM 0 points

Ah... so not one individual personality, but a "city" of AIs? Well, if I see it not as a "robotic superhuman" but as a "robotic super-humankind," then it certainly becomes possible: a whole species of more efficient beings could of course outcompete a lesser species. But I was under the impression that running many beings, each advanced enough to be sentient (granted, Yudkowsky claims intelligence is possible without sentience, but how would a non-sentient being conceptualize?), would be prohibitively expensive in hardware. I mean, imagine simulating all of us, or at least a human city...

Comment author: Wes_W 30 June 2015 11:04:37PM 0 points

If we could build a working AGI that required a billion dollars of hardware for world-changing results, why would Google not throw a billion dollars of hardware at it?