Lumifer comments on What should a friendly AI do, in this situation? - Less Wrong

Post author: Douglas_Reay 08 August 2014 10:19AM




Comment author: Lumifer 08 August 2014 03:33:42PM 0 points

Bertram will soon overtake Albert and that would be a significant threat to Albert's friendly supergoal.

What is that "friendly supergoal"? It looks awfully similar to "I will not tolerate any challenges to my power".

Comment author: randallsquared 08 August 2014 10:09:00PM 7 points

Most goals include "I will not tolerate any challenges to my power" as a subgoal. Tolerating challenges to the power needed to execute a goal reduces the likelihood of achieving it.

Comment author: Luke_A_Somers 11 August 2014 11:17:31AM 2 points

There are plenty of other things that look similar to that, such as "I will not let a UFAI take over our future light cone."