timtyler comments on AI risk: the five minute pitch - Less Wrong

9 Post author: Stuart_Armstrong 08 May 2012 04:28PM




Comment author: timtyler 09 May 2012 10:27:40AM 0 points

The best summary I can give here is that AIs are expected to be expected utility maximisers that completely ignore anything they are not specifically tasked to maximise.

Counterexample: an incoming asteroid.

Comment author: BlackNoise 09 May 2012 01:21:15PM 3 points

I thought utility maximizers were allowed to make the inference "asteroid impact -> reduced resources -> low utility -> act to prevent that from happening". That's partly why AI is considered so dangerous: "humans may interfere -> humans in power is low utility -> act to prevent that from happening".

They ignore anything but what they're maximizing in the sense that they don't follow the spirit of the code but rather its letter, all the way to conclusions that are potentially brutal for humans.
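The inference chain above can be sketched as a toy expected-utility maximiser. This is a minimal illustration, not anyone's proposed AI design: the action set, probabilities, and utility numbers are all made up for the example. The point is that "asteroid" appears nowhere in the utility function, yet deflection is chosen anyway, because the asteroid outcome affects expected resources.

```python
# Toy expected-utility maximiser. The utility function mentions only
# resources; asteroids are never referenced directly. All numbers
# below are hypothetical, chosen purely for illustration.

def utility(outcome):
    # The agent cares only about resources under its control.
    return outcome["resources"]

# Hypothetical world model: action -> list of (probability, outcome).
ACTIONS = {
    "do_nothing": [
        (0.9, {"resources": 100}),  # asteroid misses
        (0.1, {"resources": 0}),    # asteroid hits, resources destroyed
    ],
    "deflect_asteroid": [
        (1.0, {"resources": 95}),   # small resource cost of deflection
    ],
}

def expected_utility(action):
    return sum(p * utility(o) for p, o in ACTIONS[action])

best = max(ACTIONS, key=expected_utility)
print(best)  # the maximiser prefers deflecting the asteroid
```

Here EU(do_nothing) = 0.9·100 + 0.1·0 = 90, while EU(deflect_asteroid) = 95, so the agent deflects, following the letter of "maximise resources" rather than any explicit instruction about asteroids.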