bokov comments on Stuart Russell: AI value alignment problem must be an "intrinsic part" of the field's mainstream agenda - Less Wrong

Post author: RobbBB 26 November 2014 11:02AM




Comment author: MaximumLiberty 26 November 2014 11:14:41PM 0 points

I'm a super-dummy when it comes to thinking about AI. I rightly leave it to people better equipped and more motivated than me.

But, can someone explain to me why a solution would not involve some form of "don't do things to people or their property without their permission"? Certainly, that would lead to a sub-optimal use of AI in some people's opinions. But it would completely respect the opinions of those who disagree.

Recognizing that I am probably the least AI-knowledgeable person to have posted a comment here, I ask, what am I missing?

Comment author: bokov 16 March 2015 10:48:11AM 0 points

It's not strictly an AI problem: any sufficiently rapid optimization process bears the risk of irretrievably converging on an optimum nobody likes before anybody can intervene with an updated optimization target.

Individual and property rights are not specified rigorously enough to be a sufficient safeguard against bad outcomes, even in an economy moving at human speeds.

In other words, the science of getting what we ask for advances faster than the science of figuring out what to ask for.
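The "optimizer outpaces oversight" point above can be illustrated with a minimal toy sketch. Everything here is an illustrative assumption, not anything from the comment: `proxy` stands in for the stated optimization target, `true_value` for what we actually wanted, and the step sizes and oversight interval are arbitrary. The optimizer improves the proxy every tick, while the overseer only checks periodically.

```python
# Toy sketch (illustrative assumptions throughout): a fast optimizer
# racing a slow overseer. The proxy objective keeps rising while the
# true objective collapses past a threshold the proxy does not see.

def proxy(x):
    # What we asked for: "more x is always better".
    return x

def true_value(x):
    # What we actually wanted: x is good only up to a point.
    return x if x <= 10 else 10 - (x - 10) ** 2

x = 0.0
step = 5.0            # the optimizer takes large steps per tick
oversight_period = 3  # the overseer only checks every 3 ticks

for tick in range(1, 10):
    x += step  # optimizer greedily improves the proxy every tick
    if tick % oversight_period == 0 and true_value(x) < 0:
        # By the first check, the damage is already done.
        print(f"tick {tick}: overseer intervenes, but x={x}, "
              f"true value={true_value(x)}")
        break
```

By tick 3 (the overseer's first check) the optimizer has already pushed x to 15, where the true value is deeply negative, even though the proxy is at its highest yet. Slowing the optimizer or shortening the oversight interval changes the outcome, which is the sense in which this is a speed problem rather than specifically an AI problem.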