If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
ELI5...
Why can't we program hard stops into AI, where it is required to pause and ask for further instruction?
Why is "spontaneous emergence of consciousness and evil intent" not a risk?
Because instructions are words, and "ask for instructions" implies an ability to understand and a desire to follow. The desire to follow instructions according to their givers' intentions is more-or-less a restatement of the Hard Problem of FAI itself: how do we formally specify a utility function that converges to our own in the limit of increasing optimization power and autonomy?