Lyyce comments on [link] Disjunctive AI Risk Scenarios - Less Wrong Discussion

10 points · Post author: Kaj_Sotala 05 April 2016 12:51PM

Comment author: Lyyce 05 April 2016 04:10:10PM 1 point

I'm not sure an intelligence explosion can happen without significant speed or computational power improvements.

I guess it boils down to what happens if you let a human-level intelligence self-modify without modifying the hardware (in other words, how optimised human intelligence already is). So far, the ratio of results achieved to computational power used has been significantly in favor of humans compared to AI, but the latter is improving fast, and an AI doesn't need to be as versatile as a human. Is there any work on what the limits of optimising intelligence might be?
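To make that ratio concrete, here's a rough back-of-envelope sketch in Python. All of the figures are commonly cited ballpark estimates I'm supplying for illustration, not measurements from any particular source: ~20 W for the human brain, ~10^15 operations per second as one mid-range estimate of brain-equivalent compute, and ~10^13 FLOPS at ~250 W for a 2016-era high-end GPU.

```python
# Back-of-envelope comparison of "results per unit of compute", using
# energy efficiency (operations per watt) as a crude proxy.
# All numbers are rough, commonly cited estimates, not measurements.

BRAIN_OPS_PER_SEC = 1e15  # one mid-range estimate of brain-equivalent compute
BRAIN_WATTS = 20          # approximate power draw of the human brain

GPU_OPS_PER_SEC = 1e13    # ~10 TFLOPS, a 2016-era high-end GPU
GPU_WATTS = 250           # typical power draw of such a GPU

brain_eff = BRAIN_OPS_PER_SEC / BRAIN_WATTS  # ~5e13 ops/W
gpu_eff = GPU_OPS_PER_SEC / GPU_WATTS        # ~4e10 ops/W

print(f"Brain: {brain_eff:.1e} ops/W")
print(f"GPU:   {gpu_eff:.1e} ops/W")
print(f"Ratio: {brain_eff / gpu_eff:.0f}x in favor of the brain")  # ~1250x
```

On these numbers the brain comes out roughly three orders of magnitude more efficient, which is the sense in which the ratio currently favors humans, though the gap has been shrinking with each hardware generation.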

This may look like a nitpick, since hardware capacity is increasing steadily and will soon exceed that of the human brain, but it would be a lot easier to prevent an intelligence explosion by putting a limit on the available computational power.

Comment author: Kaj_Sotala 06 April 2016 12:03:19PM *  0 points

It's unclear, but in narrow AI we've seen software get smarter even in cases where the hardware is kept constant, or actually made worse. For example, the top chess engine of 2014 beats a top engine from 2006 even when you give the 2014 engine only 2% of the computing power of the 2006 engine. That would seem to suggest that an intelligence explosion without hardware improvements might be possible, at least in principle.
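If you wanted to run that kind of handicapped comparison yourself, here's a minimal sketch using the python-chess library. The engine paths are hypothetical placeholders, and I'm using the per-move search-node budget as a stand-in for "computing power", which is one common way such handicaps are imposed:

```python
# Sketch: one game between two UCI chess engines, where the newer engine
# gets only 2% of the older engine's per-move node budget.
# Engine paths below are placeholders, not real binaries.
import chess
import chess.engine

OLD_ENGINE_PATH = "./engine_2006"  # hypothetical path to a 2006-era engine
NEW_ENGINE_PATH = "./engine_2014"  # hypothetical path to a 2014-era engine

OLD_NODES = 1_000_000        # per-move search budget for the old engine
NEW_NODES = OLD_NODES // 50  # 2% of that budget for the new engine

board = chess.Board()
with chess.engine.SimpleEngine.popen_uci(NEW_ENGINE_PATH) as new, \
     chess.engine.SimpleEngine.popen_uci(OLD_ENGINE_PATH) as old:
    while not board.is_game_over():
        # The handicapped new engine plays White, the old engine plays Black.
        engine, nodes = (new, NEW_NODES) if board.turn == chess.WHITE else (old, OLD_NODES)
        result = engine.play(board, chess.engine.Limit(nodes=nodes))
        board.push(result.move)

print(board.result())  # "1-0" would mean the handicapped new engine won
```

A single game proves little, of course; real comparisons play hundreds of games with both colors and compute Elo differences from the results.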

In practice I would expect an intelligence explosion to lead to hardware improvements as well, though. No reason for the AI to constrain itself just to the software side.