
jacob_cannell comments on Top 9+2 myths about AI risk - Less Wrong Discussion

Post author: Stuart_Armstrong, 29 June 2015 08:41PM (44 points)




Comment author: [deleted] 30 June 2015 02:26:39PM 0 points

Ah... so not one individual personality, but a "city" of AIs? If I see it not as a "robotic superhuman" but as a "robotic super-humankind", then it certainly becomes possible: a whole species of more efficient beings could of course outcompete a lesser species. But I was under the impression that running many beings, each advanced enough to be sentient (OK, Yudkowsky claims intelligence is possible without sentience, but how would a non-sentient being conceptualize?), would be prohibitively expensive in hardware. I mean, imagine simulating all of us, or at least a human city...

Comment author: jacob_cannell 01 July 2015 05:12:32AM 2 points

We can already run neural nets with 1 billion synapses at 1,000 Hz on a single GPU, or 10 billion synapses at 100 Hz (real-time). At current rates of growth (software + hardware), that will be up to 100 billion synapses at 100 Hz per GPU in just a few years.
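As a rough back-of-envelope check, the two configurations quoted above work out to the same raw throughput, and the projection is a 10x improvement over it (a sketch only; the growth figures are the comment's own claims, not measurements):

```python
# Both current configurations deliver the same synaptic event rate per GPU.
current_a = 1e9 * 1000   # 1 billion synapses at 1,000 Hz
current_b = 10e9 * 100   # 10 billion synapses at 100 Hz (real-time)

assert current_a == current_b == 1e12  # ~1e12 synaptic events/sec per GPU

# The projected configuration is 100 billion synapses at 100 Hz:
projected = 100e9 * 100  # 1e13 events/sec, i.e. a 10x gain per GPU

print(f"current:   {current_a:.0e} events/sec")
print(f"projected: {projected:.0e} events/sec ({projected / current_a:.0f}x)")
```

The point of the arithmetic is that the 1,000 Hz and 100 Hz figures trade clock rate against network size at constant throughput, so the projected gain is purely a capacity increase.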

At that point, it mainly becomes a software issue; and once AGIs become useful, the hardware base is already there to create millions of them, and soon billions.