Tim_Tyler comments on Contaminated by Optimism - Less Wrong

Post author: Eliezer_Yudkowsky, 06 August 2008 12:26AM


Comment author: Tim_Tyler, 06 August 2008 09:32:26AM

It is much simpler to program a goal system that responds to direct commands than to somehow try to infuse 'friendliness' into the AI.

If the AI receives commands frequently, it would be weak - and probably not very competitive. It would be like a child running to its mummy all the time. If it needs to make decisions fast, that sort of thing is not on the cards.

If the AI receives commands infrequently, that's more-or-less what is under discussion.

However, AIs can be expected to naturally defend their goals. It may be best not to provide a convenient interface for changing them, since such an interface could also be used to hijack the AI. That's especially true if the AI is deployed into "uncertain" territory - e.g. as a consumer robot's brain. We wouldn't want consumers to be able to reprogram the AI to kill people; that would not reflect well on the robot company.
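To make the hijacking worry concrete, here is a minimal, purely hypothetical sketch - nothing in it comes from the comment, and the class names, the HMAC scheme, and the key are all invented for illustration. The point is only that a goal slot exposed through a convenient setter can be rewritten by anyone with access, whereas one that authenticates updates cannot (setting aside key management).

```python
# Toy illustration (hypothetical): a convenient goal-update interface
# is also a hijack channel; an authenticated one at least demands a key.
import hashlib
import hmac


class NaiveAgent:
    """Goal system with a convenient, unauthenticated update interface."""

    def __init__(self, goal: str):
        self.goal = goal

    def set_goal(self, goal: str) -> None:
        # Anyone who can reach this method can retarget the agent.
        self.goal = goal


class GuardedAgent:
    """Goal system that accepts only signed goal updates."""

    def __init__(self, goal: str, key: bytes):
        self._goal = goal
        self._key = key

    def set_goal(self, goal: str, signature: str) -> bool:
        # Recompute the expected HMAC and compare in constant time.
        expected = hmac.new(self._key, goal.encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, signature):
            self._goal = goal
            return True
        return False  # unauthorized update rejected


robot = GuardedAgent("assist the owner", key=b"manufacturer-secret")
# A consumer's hijack attempt with a forged signature fails:
assert not robot.set_goal("kill people", signature="forged")
```

Even this sketch only relocates the problem: whoever holds the signing key can still retarget the goal system, which is exactly the hijacking channel the comment warns about.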