Lumifer comments on What should a friendly AI do, in this situation? - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
What is that "friendly supergoal"? It looks awfully similar to "I will not tolerate any challenges to my power".
Most goals include "I will not tolerate any challenges to my power" as a subgoal: tolerating challenges to the power needed to execute a goal reduces the likelihood of achieving it.
There are plenty of other things that look similar to that - such as, "I will not let a UFAI take over our future light cone."