A putative new idea for AI control; index here.
For anyone but an extreme total utilitarian, there is a great difference between AIs that would eliminate everyone as a side effect of focusing on their own goals (indifferent AIs) and AIs that would effectively eliminate everyone through a bad instantiation of human-friendly values (false-friendly AIs). Examples of indifferent AIs are things like paperclip maximisers; examples of false-friendly AIs are "keep humans safe" AIs that entomb everyone in bunkers, lobotomised and on medical drips.
The difference becomes apparent when you consider multiple AIs and negotiations between them. Imagine you have a large class of AIs, all of them indifferent (IAIs) except for one, which you can't identify, that is friendly (FAI). You now let them negotiate a compromise between themselves. Then, under many possible compromises, most of the universe would get optimised for whatever goals the AIs set themselves, while a small portion (maybe just a single galaxy's resources) would be dedicated to making human lives incredibly happy and meaningful.
But if there is a false-friendly AI (FFAI) in the mix, things can go very wrong. That is because those happy and meaningful lives are a net negative to the FFAI. These humans are running dangers - possibly physical, possibly psychological - that lobotomisation and bunkers (or their digital equivalents) could protect against. Unlike the IAIs, which would only complain about the loss of resources to the FAI, the FFAI finds the FAI's actions positively harmful (and possibly vice versa), making compromises much harder to reach.
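To make the asymmetry concrete, here is a toy sketch (mine, not from the post; the utility functions and the harm weight are invented for illustration). An IAI merely loses whatever resources the FAI claims, while the FFAI assigns negative value to what the FAI does with them:

```python
# Toy illustration only; these utility functions and numbers are
# assumptions, not anything specified in the post.

def iai_utility(own_share, fai_share):
    # Indifferent AI (e.g. a paperclip maximiser): the FAI's galaxy
    # is merely resources the IAI doesn't control.
    return own_share

def ffai_utility(own_share, fai_share):
    # False-friendly "keep humans safe" AI: flourishing humans are
    # exposed to danger, so the FAI's galaxy counts as active harm,
    # weighted here (arbitrarily) at three times its resource value.
    harm_weight = 3.0
    return own_share - harm_weight * fai_share

# A compromise that cedes one unit of resources in a thousand to the FAI:
own, fai = 0.999, 0.001
print(iai_utility(own, fai))   # 0.999: a negligible concession
print(ffai_utility(own, fai))  # 0.996: the ceded galaxy is a harm on
                               # top of the lost resources
```

On these made-up numbers, the FFAI would sacrifice up to three units of its own resources to shut down each unit of FAI optimisation, which is exactly the kind of preference that turns resource-splitting into conflict.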
And the compromises reached might be bad ones. For instance, what if the FAI and FFAI agree on "half-lobotomised humans" or something like that? You might ask why the FAI would agree to that, but there is a great difference between an AI that would be friendly on its own, and one that would choose only friendly compromises when negotiating with another powerful AI that has human-relevant preferences.
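For a crude picture of how such a compromise could get selected, here is a Nash-bargaining-style sketch (the outcomes, utilities, and disagreement payoffs are all invented; nothing here comes from the post). Each party vetoes any outcome it likes less than open conflict, and the mangled middle option is the only one to survive both vetoes:

```python
# Hypothetical utilities for the FAI and FFAI over three outcomes;
# pick the outcome maximising the product of gains over the
# disagreement point (Nash bargaining). All numbers are invented.

outcomes = {
    "flourishing humans":      {"fai":  1.0, "ffai": -1.0},
    "half-lobotomised humans": {"fai":  0.3, "ffai":  0.4},
    "bunkers and drips":       {"fai": -0.5, "ffai":  1.0},
}
disagreement = {"fai": 0.0, "ffai": 0.0}  # assumed payoff of open conflict

def nash_product(u):
    gains = [u[k] - disagreement[k] for k in ("fai", "ffai")]
    if any(g <= 0 for g in gains):
        return float("-inf")  # some party prefers conflict: infeasible
    return gains[0] * gains[1]

print(max(outcomes, key=lambda o: nash_product(outcomes[o])))
# -> half-lobotomised humans: the only outcome both prefer to conflict
```

The point is not the particular bargaining solution, but that each party's preferred outcome dies on the other's veto, leaving only the degraded middle ground.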
Some designs of FFAIs might not lead to these bad outcomes - just like IAIs, they might be content to rule over a galaxy of lobotomised humans, while the FAI has a galaxy of its own, where its humans run all those dangers. But generally, FFAIs would not come about because someone set out to design a FFAI, let alone a FFAI that can safely trade with a FAI. Instead, they would come from someone designing a FAI, and failing. And the closer that design got to being a FAI, the more dangerous the failure could be.
So, when designing a FAI, make sure to get it right. And, though you absolutely, positively need to get it right, also make sure that if you do fail, the failure results in a FFAI that can safely be compromised with, should someone else get out a true FAI in time.
I question the accuracy of your mental model of Stuart_Armstrong, and of your reading of what he wrote. There are many ways in which an insufficiently friendly AI could harm us, and they aren't all about "overriding difference" or "less freedom". If (e.g.) people are entombed in bunkers, lobotomized and on medical drips, lack of freedom is not their only problem. (I confess myself at a bit of a disadvantage here, because I don't know exactly what you mean by "overriding difference"; it doesn't sound to me equivalent to lacking freedom, for instance. Your love of neologism is impeding communication.)
I don't believe you have any good reason to think he isn't. All you know is that he is currently posting a lot of stuff about something else, and it appears that this bothers you.
Allow me to answer the question that I think is implicit in your first paragraph. The reason why I'm making a fuss about this is that you are doing something incredibly rude: barging into a discussion that has nothing at all to do with your pet obsession and trying to wrench the discussion onto the topic you favour. (And, in doing so, attacking someone who has done nothing to merit your attack.)
I have seen online communities destroyed by individuals with such obsessions. I don't think that's a serious danger here; LW is pretty robust. But, although you don't have the power to destroy LW, you do (unfortunately) have the power to make every discussion here just a little bit more annoying and less useful, and I am worried that you are going to try, and I would like to dissuade you from doing it.