eli_sennesh comments on Siren worlds and the perils of over-optimised search

Post author: Stuart_Armstrong 07 April 2014 11:00AM 27 points

Comment author: [deleted] 07 April 2014 04:13:01PM 2 points

Second, we may be in a situation where we ask an AI to simulate the consequences of its choice, glance at the result, and then approve or disapprove. That's less a search problem and more the original siren world problem, and we should be aware of it.

This sounds extremely counterintuitive. If I have an Oracle AI that I can trust to answer more-or-less verbal requests (defined as: any request or "program specification" too vague for me to actually formalize), why have I not simply asked it to learn the Idea of the Good from a large corpus of cultural artifacts, and then explain to me (again, verbally) what it has learned? If I cannot trust the Oracle AI, dear God, why am I having it explore potential eutopian future worlds for me?

Comment author: Stuart_Armstrong 07 April 2014 05:40:35PM 9 points

If I cannot trust the Oracle AI, dear God, why am I having it explore potential eutopian future worlds for me?

Because I haven't read Less Wrong? ^_^

This is another argument against using constrained but non-friendly AI to do stuff for us...