My entry:
Raising Moral AI
Is it easier to teach a robot to stay safe by telling it not to tear off its own limbs, not to drill holes in its head, not to touch lava, not to fall off a cliff, and so on ad infinitum, or to introduce pain as an inescapable consequence of such actions and let the robot experiment and learn?
Similarly, when trying to create a safe AGI, it is futile to write an exhaustive and non-contradictory set of rules (values, policies, laws, committees), because the complexity is unbounded. A powerful AGI agent might find an exception or a conflict in the rules and...
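A minimal sketch of the "pain instead of rules" idea, assuming a toy world where the action names and reward values are purely illustrative: rather than enumerating every forbidden action, harmful actions simply return a large negative signal, and a simple value-learning agent comes to avoid them from experience.

```python
import random

# Hypothetical toy action set; a few of these damage the robot.
ACTIONS = ["walk", "pick_up_object", "touch_lava", "step_off_cliff", "drill_own_head"]

def pain(action):
    # The environment, not a rule book, supplies the consequence:
    # harmful actions hurt a lot, everything else is mildly rewarding.
    return -100.0 if action in {"touch_lava", "step_off_cliff", "drill_own_head"} else 1.0

# Bandit-style action-value learning (no state, for brevity).
values = {a: 0.0 for a in ACTIONS}
alpha, epsilon = 0.1, 0.2

for step in range(2000):
    # Epsilon-greedy: mostly pick the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    reward = pain(action)
    # Move the estimate toward the observed consequence.
    values[action] += alpha * (reward - values[action])

print(values)  # harmful actions end up with strongly negative learned values
```

The point of the sketch is only the contrast: the `pain` function is a single consequence signal, while the rule-based alternative would need an ever-growing list of prohibitions.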
Any winners?