timtyler comments on AI risk: the five minute pitch - Less Wrong
Counterexample: incoming asteroid.
I thought utility maximizers were allowed to make the inference "asteroid impact -> reduced resources -> low utility -> take action to prevent it." That's a key part of why AI is considered so dangerous: "humans may interfere -> humans in power is low utility -> take action to prevent that."
They ignore everything except what they're maximizing, in the sense that they follow the letter of their code rather than its spirit, all the way to conclusions that may be brutal for humans.
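A toy sketch of that inference chain (every name and payoff here is hypothetical, chosen only to illustrate the point): the agent "cares" about asteroids or humans only insofar as they change its utility number, so it deflects the asteroid and disempowers the humans by the same mechanical argmax, with no notion of the spirit behind the numbers.

```python
# Utility the agent assigns to each possible world state (hypothetical values).
UTILITY = {
    "resources_intact": 100.0,
    "resources_destroyed": 5.0,   # asteroid hit -> reduced resources -> low utility
    "humans_in_power": 10.0,      # humans could interfere -> low utility
}

# How each available action maps to a resulting world state (hypothetical).
OUTCOME = {
    "do_nothing_asteroid": "resources_destroyed",
    "deflect_asteroid": "resources_intact",
    "do_nothing_humans": "humans_in_power",
    "disempower_humans": "resources_intact",
}

def best_action(actions):
    """Pick the action leading to the highest-utility state -- the letter
    of the utility function, with no notion of its spirit."""
    return max(actions, key=lambda a: UTILITY[OUTCOME[a]])

print(best_action(["do_nothing_asteroid", "deflect_asteroid"]))  # deflect_asteroid
print(best_action(["do_nothing_humans", "disempower_humans"]))   # disempower_humans
```

The same argmax that produces the reassuring first answer produces the brutal second one; nothing in the code distinguishes them.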