Dmytry comments on SotW: Be Specific - Less Wrong

38 Post author: Eliezer_Yudkowsky 03 April 2012 06:11AM




Comment author: Dmytry 07 April 2012 01:52:07PM *  -2 points

CEV is not defined to do what you as-is actually want, but to do what you would have wanted, even in circumstances where you as-is actually want something else, as the 2004 paper cheerfully explains.

In any case, once you assume such intent-understanding interpretative powers of AI, it's hard to demonstrate why instructing the AI in plain English to "Be a good guy. Don't do bad things" would not be a better shot.

Comment author: wedrifid 07 April 2012 02:02:26PM 2 points

In any case, once you assume such intent-understanding interpretative powers of AI

Programmed in with great effort, with thousands of hours of research and development, and even then with a great chance of failure. That isn't "assumption".

it's hard to demonstrate why instructing the AI in plain English to "Be a good guy. Don't do bad things" would not be a better shot.

That would seem to be a failure of imagination. That exhortation tells even an FAI-complete AI that is designed to follow commands very little.