Simulation_Brain comments on Should I believe what the SIAI claims? - Less Wrong

Post author: XiXiDu, 12 August 2010 02:33PM


Comment author: utilitymonster, 13 August 2010 01:30:55PM, 5 points

a) Something much smarter than us will do whatever it wants, and very thoroughly. (This doesn't require godlike AI, just something smarter than us; self-improvement helps, too.)

b) The vast majority of possible "wants", pursued thoroughly, will destroy us. (Any goal taken to extremes will use all available matter in accomplishing it.)

Therefore, such an AI will be dangerous if not VERY carefully designed. Humans are notably greedy and bad planners individually, and often worse in groups.

I've heard a lot of variations on this theme. They all seem to assume that the AI will be a maximizer rather than a satisficer. I agree the AI could be a maximizer, but I don't see that it must be. How much of this risk goes away if we give the AI small ambitions?

Comment author: Simulation_Brain, 13 August 2010 07:27:19PM, 3 points

Now this is an interesting thought. Even a satisficer with several goals but no upper bound on any of them will still use all available matter on the mix of goals it's working toward. But a limited goal ("make money for GiantCo; once you reach one trillion, stop") seems as though it would be less dangerous. I can't remember this coming up in Eliezer's CFAI document, but I suspect it's in there, with holes poked in its reliability.
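To make the distinction concrete, here's a minimal toy sketch (mine, not from CFAI; the "one trillion" cap comes from the GiantCo example above, and all function names and numbers are hypothetical). The point is just that under a hard cap, the marginal value of additional resources drops to zero past the threshold, whereas for a maximizer it never does:

```python
# Toy contrast between an unbounded (maximizer) and a capped
# (satisficer-style) utility over a single resource, e.g. money.

def maximizer_utility(money: float) -> float:
    """Unbounded: every additional dollar always adds utility."""
    return money

def satisficer_utility(money: float, cap: float = 1e12) -> float:
    """Bounded: utility stops increasing at the cap (one trillion here)."""
    return min(money, cap)

if __name__ == "__main__":
    for wealth in (1e9, 1e12, 1e15):
        gain_max = maximizer_utility(wealth + 1) - maximizer_utility(wealth)
        gain_sat = satisficer_utility(wealth + 1) - satisficer_utility(wealth)
        print(f"wealth={wealth:.0e}  marginal value of one more dollar: "
              f"maximizer={gain_max:.1f}  satisficer={gain_sat:.1f}")
```

Past the cap, the capped agent gains nothing from grabbing more matter, which is the intuition behind calling it less dangerous; whether that intuition survives closer scrutiny (instrumental subgoals, probability-of-success maximizing, etc.) is exactly the kind of hole I'd expect CFAI to poke.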