Wilka comments on Q&A with Richard Carrier on risks from AI - Less Wrong

Post author: XiXiDu 13 December 2011 10:00AM


Comment author: Wilka 14 December 2011 04:31:42PM 3 points

For example, if we started a human-level AGI tomorrow, its ability to revise itself would be hugely limited by our slow and expensive infrastructure (e.g. manufacturing the new circuits, building the mainframe extensions, supplying them with power, debugging the system).

This suggests that he sees the limiting factor for AI as hardware. However, I've heard people argue that we probably already have the hardware needed for human-level AI if we get the software right (and I'm pretty sure that was before things like cloud computing were so easily available).

I wonder how likely he thinks it is that a single organisation today (maybe Google?) already has the hardware required to run a human-level AI at the same speed as the human brain, assuming we magically solved all the software problems.
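As a rough sanity check on that question, one can compare published estimates of the brain's computational throughput with the capacity of a large 2011-era machine. The figures below (a Moravec-style low estimate, a Kurzweil-style high estimate, and the ~10 petaFLOPS of the K computer, the fastest supercomputer in late 2011) are outside numbers brought in for illustration, not claims from this thread:

```python
# Back-of-envelope: does a single 2011-era supercomputer match the brain's
# estimated throughput?  All figures are rough published estimates, assumed
# here for illustration only.

BRAIN_OPS_LOW = 1e14    # Moravec-style estimate, operations/second
BRAIN_OPS_HIGH = 1e16   # Kurzweil-style estimate, operations/second

K_COMPUTER_FLOPS = 1e16  # K computer, ~10 petaFLOPS in late 2011

for label, brain_ops in [("low", BRAIN_OPS_LOW), ("high", BRAIN_OPS_HIGH)]:
    ratio = K_COMPUTER_FLOPS / brain_ops
    print(f"{label} brain estimate: machine has {ratio:,.0f}x brain throughput")
```

On these assumptions the answer is "barely to comfortably yes" on raw operations per second, which is consistent with the view that software, not hardware, is the missing piece.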

Comment author: timtyler 14 December 2011 11:19:36PM 1 point

This suggests that he sees the limiting factor for AI as hardware. However, I've heard people argue that we probably already have the hardware needed for human-level AI if we get the software right

We do, but it's not cost-effective or fast enough, so humans are cheaper and (sometimes) better. Within a decade, the estimated hardware may cost around 100 USD and the performance gap won't be there. Sometime around then, things seem likely to get more interesting.
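The decade-long cost projection implicitly assumes a Moore's-law-style decline in price per unit of compute. A minimal sketch of that arithmetic, assuming (hypothetically) a price-performance doubling every 18 months, which is not a figure stated in the comment:

```python
# Sketch: how far does hardware cost fall in a decade if price-performance
# doubles every 18 months?  The doubling time is an assumed Moore's-law-style
# figure, used only to illustrate the projection.

DOUBLING_TIME_YEARS = 1.5
YEARS = 10

improvement = 2 ** (YEARS / DOUBLING_TIME_YEARS)  # ~100x over a decade
print(f"Price-performance improvement over {YEARS} years: ~{improvement:.0f}x")

# Under this assumption, compute costing ~$10,000 today would cost
# roughly $100 in a decade, matching the figure in the comment.
print(f"$10,000 of compute today costs ~${10000 / improvement:.0f} then")
```

The point of the sketch is just that "around 100 USD in a decade" corresponds to a roughly hundredfold price-performance improvement, which is within the historical trend.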