Bugmaster comments on So You Want to Save the World - Less Wrong

Post author: lukeprog, 01 January 2012 07:39AM (41 points)

Comment author: dspeyer 28 December 2011 06:17:55AM 2 points

Sorry if I've missed a link somewhere, but have we taken a serious look at Intelligence Amplification as a safer alternative? It saves us the problem of reverse-engineering human values by simply using them as they exist. It's also less sudden, and it can be spread across many people at once so they can keep an eye on each other.

Comment author: lukeprog 28 December 2011 07:27:35AM 8 points

Amplified human intelligence is no match for recursively self-improved AI, which is inevitable if science continues. Human-based intelligence has too many limitations. This becomes less true as you approach WBE (whole brain emulation), but then you approach neuromorphic AI even faster (or so it seems to me).

Comment author: Bugmaster 07 January 2012 03:31:56AM 1 point

Amplified human intelligence is no match for recursively self-improved AI, which is inevitable if science continues.

Just to clarify: when you say "recursively self-improved", do you also imply something like "unbounded", or "with an unimaginably high upper bound"? If the AI managed to improve itself to, say, regular human genius level and then stopped, it wouldn't really be that big of a deal.
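
To make the bounded/unbounded distinction concrete, here is a toy numerical sketch (an editor's illustration only; the improvement maps and constants are invented for this example, not drawn from the thread). A self-improvement process with diminishing returns converges to a fixed point, while a compounding one grows without bound:

    # Toy model of recursive self-improvement (illustrative only; the
    # functions and constants are invented, not from the discussion).
    # An agent with capability c rewrites itself into one with capability
    # step(c). Whether the process levels off or explodes depends entirely
    # on the shape of the improvement map.

    def bounded_step(c, ceiling=100.0, rate=0.5):
        """Diminishing returns: each rewrite closes half the remaining
        gap to a hard ceiling, so capability converges to `ceiling`."""
        return c + rate * (ceiling - c)

    def unbounded_step(c, gain=1.1):
        """Compounding returns: each rewrite multiplies capability,
        so the sequence grows without bound."""
        return gain * c

    def run(step, c=1.0, iters=50):
        for _ in range(iters):
            c = step(c)
        return c

    print(run(bounded_step))    # ~100.0 -- stalls at the ceiling
    print(run(unbounded_step))  # ~117.4 after 50 steps, still climbing

Bugmaster's point, in these terms: if the real improvement map looks like bounded_step with a ceiling near human genius, recursive self-improvement is unremarkable; the worry only applies if it looks more like unbounded_step, or like bounded_step with a ceiling far above us.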

Comment author: lukeprog 07 January 2012 04:11:50AM 0 points

Right; with a high upper bound. There is plenty of room above us.