Sebastian_Hagen comments on Superintelligence 12: Malignant failure modes - Less Wrong
Any level of perverse instantiation in a sufficiently powerful AI is likely to lead to total UFAI, i.e. a full existential catastrophe. Either you get the AI design right, so that it doesn't wirehead itself (or wirehead others against their will), or you don't. I don't think there's much middle ground.
OTOH, the relevance of Mind Crime really depends on the volume. The FriendlyAICriticalFailureTable has this instance:
This scenario always struck me as a (qualified) FAI success. There's a cost, and it's large in absolute terms, but the benefits will outweigh it by a huge factor - indeed by enough orders of magnitude that even a slight increase in the probability of getting pre-empted by a UFAI may be too high a price to pay for fixing this kind of bug.
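To make the orders-of-magnitude point concrete, here's a minimal expected-value sketch; every number in it (the cost/value ratio, the added risk) is an illustrative assumption of mine, not something from the post:

```python
# Toy expected-value comparison (all numbers are illustrative assumptions).

V = 1.0        # value of a full FAI success, normalized to 1
C = 1e-6 * V   # assumed total disvalue of the one-off simulation bug
delta = 1e-5   # assumed extra UFAI risk taken on by delaying launch to fix it

# Fixing the bug pays off only if the cost it removes exceeds the
# expected value lost to the added risk of getting pre-empted by a UFAI.
print("worth fixing:", C > delta * V)  # False under these assumptions
```

The only point is that once C/V is small enough, almost any nonzero delta dominates the comparison.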
So cases like this - where it only happens until the AI matures enough to see that its values make this a bad idea, and then stops doing it - aren't as bad as an actual FAI failure.
Of course, if it's an actual problem with the AI's value content, one that causes the AI to keep doing this kind of thing throughout its existence, it may well outweigh any good the AI ever does. The total cost in this case becomes hard to predict, depending crucially on just how many resources the AI spends on these simulations, and how nasty they are on average.
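As a rough sketch of that dependence (placeholder numbers again, in arbitrary units), the total cost scales linearly in each of the unknowns:

```python
# Rough decomposition of the total cost of a persistent value-content flaw.
# Every quantity here is a hypothetical placeholder in arbitrary units.

resource_fraction = 0.01   # share of the AI's resources going into such simulations
nastiness = 0.5            # average disvalue per unit of resources so spent
duration = 1e9             # how long the flawed behavior persists

total_cost = resource_fraction * nastiness * duration
print("total cost:", total_cost)  # 5e6 in these units; unbounded as duration grows
```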