Stuart_Armstrong comments on Indifferent vs false-friendly AIs - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (12)
Because that's an extra constraint: universe AND FFAI. The class of AIs that would be FAI given the universe alone is larger than the class that would be FAI given the universe plus an FFAI to deal with.
To pick a somewhat crude example, imagine an AI that maximises the soft minimum of two quantities: human happiness and human preferences. It turns out each quantity is roughly as difficult to satisfy as the other (i.e. not too many orders of magnitude between them), so this is an FAI in our universe.
However, add an FFAI that hates human preferences and loves human happiness. Then the negotiated compromise might settle on very high happiness at the expense of preferences, which the previous FAI can live with (it was only a soft minimum, not a hard minimum).
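A minimal sketch of why the soft minimum matters here, assuming one particular choice of soft minimum (a power mean with negative exponent — the comment doesn't specify a formula, so this is just illustrative): a hard-min utility strictly prefers a balanced outcome, while a soft-min utility can prefer a lopsided compromise where one quantity is pushed very high.

```python
def hard_min(x, y):
    # Hard minimum: extra happiness beyond the preference level is worthless.
    return min(x, y)

def soft_min(x, y, p=2.0):
    # One possible soft minimum: the power mean with exponent -p (x, y > 0).
    # It is dominated by the smaller argument but still gives some credit to
    # the larger one; as p -> infinity it approaches the hard minimum.
    return ((x ** -p + y ** -p) / 2) ** (-1 / p)

balanced = (5.0, 5.0)   # happiness and preferences both moderately satisfied
skewed = (100.0, 4.0)   # FFAI-driven compromise: huge happiness, lower preferences

# A hard-min FAI strictly prefers the balanced outcome...
assert hard_min(*skewed) < hard_min(*balanced)
# ...but this soft-min FAI prefers the skewed compromise: the enormous
# happiness term partially offsets the slightly lower preference term.
assert soft_min(*skewed) > soft_min(*balanced)
```

This is the sense in which the soft-min AI "can live with" the compromise: the FFAI's pressure toward happiness can actually move the negotiated point, which would be impossible against a hard-min utility.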
Or maybe this is a better way of formulating it: there are FAIs, and there are AIs that act as FAIs given the expected conditions of the universe. It's the second category that might be very problematic in negotiations.