...and no, it's not because of potential political impact on its goals. Although that's also a thing.
The Politics Problem is, at its root, about forming a workable set of rules by which society can operate and with which society can agree.
The Friendliness Problem is, at its root, about forming a workable set of values which are acceptable to society.
Politics as a process (I will use "politics" to refer to the process of politics henceforth) doesn't generate values; values are strictly an input: the process converts society's values into rules intended to maximize them. And the process is value-agnostic; it doesn't care what the values are, or where they come from. Which is to say, provided you solve the Friendliness Problem, its solution provides a valuable input into politics.
Politics is also an intelligence. Not in the "self-aware" sense, or even in the "capable of making good judgments" sense, but in the sense of an optimization process. Each of us is a node in this alien intelligence, and together we form what looks, to me, suspiciously like a neural network.
The Friendliness Problem applies as much to politics as to any other intelligence. Indeed, provided we can provably solve the Friendliness Problem, we should be capable of creating Friendly Politics; Friendliness should, in principle, be equally applicable to both. Now, there are some issues with this - politics runs on unpredictable hardware, namely people, and it may be that this neural architecture is fundamentally incompatible with Friendliness. But that concerns the -output- of the process. Friendliness is first an input, before it can be an output.
Moreover, we already have various political formations, and we can assess their Friendliness merely in terms of the values that went -into- them.
Which is where I think politics offers a pretty strong hint that the Friendliness Problem has no resolution:
We can't agree on which political formations are more Friendly. That's what "Politics is the Mindkiller" is all about: our inability to come to an agreement on political matters. It's not merely a matter of the rules - which is to say, it's not a matter of the output. We can't even agree on which values should be used to form the rules.
This is why I think political discussion is valuable here, incidentally. Less Wrong, by and large, has been avoiding the hard problem of Friendliness, by labeling its primary functional outlet in reality as a mindkiller, not to be discussed.
Either we can agree on what constitutes Friendly Politics, or we can't. If we can't, I don't see much hope of arriving at a Friendliness solution more broadly. Friendly to -whom- becomes the question, if it was ever anything else. Which suggests a division in types of Friendliness: Strong Friendliness, a fully generalized set of human values, acceptable to just about everyone; and Weak Friendliness, which isn't fully generalized, and is perhaps acceptable merely to a plurality. Weak Friendliness survives the political question. I do not see that Strong Friendliness can.
(Exemplified: when I imagine a Friendly AI, I imagine a hands-off benefactor who permits people to do anything they wish that won't result in harm to others. Why, look, a libertarian/libertine dictator. Does anybody envisage a Friendly AI which doesn't correspond more or less directly with their own political beliefs?)
We also can't agree on, say, the correct theory of quantum gravity. But reality is there and it works in some particular way, which we may or may not be able to discover.
The values of a friendly AI are usually assumed to be an idealization of universal human values. More precisely: when someone makes a decision, it is because their brain performs a particular computation. To the extent that this computation is the product of a specific cognitive architecture universal to our species (and not just the contingencies of their life), we could speak of "the human decision procedure", an unknown universal algorithm of decision-making implicit in how our brains are organized.
This human decision procedure includes a method of generating preferences - preferring one possibility over another. So we can "ask" the human decision procedure "what would be the best decision procedure for humans to follow?" This produces an idealized decision procedure: a human ideal for how humans should be. That idealized decision procedure is what human ethics has been struggling towards, and that is where a friendly AI should get its values, and perhaps its methods, from.
It may seem that I am assuming rather a lot about how human decision-making cognition works, but what I just described is the simplest version of the idea. There may be multiple identifiable decision procedures in the human gene pool; the genetically determined part of the human decision procedure may be largely a template with values set by experience and culture; there may be multiple conflicting equilibria at the end of the idealization process, depending on how it starts.
For example, egoism and altruism may be different computational attractors, both a possible end result of reflective idealization of the human decision procedure; in which case a "politicization" of the value-setting process is certainly possible - a struggle over initial conditions. Or it may be that once you really know how humans think - as opposed to just guessing on the basis of folk psychology and very incomplete scientific knowledge - it's apparent that this is a false opposition.
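To make the "multiple attractors" possibility concrete, here is a deliberately toy sketch in Python. Nothing in it comes from actual cognitive science: the one-dimensional "weight on others' welfare", the invented preference landscape, and the names (reflective_preference_gradient, idealize) are all illustrative assumptions of mine. It only shows how a single self-idealization rule can settle into different equilibria depending on where it starts - the "struggle over initial conditions" just described.

```python
# Toy model (my own construction, not a claim about real human cognition):
# a "decision procedure" is reduced to one number x in [0, 1], the weight it
# places on others' welfare relative to its own. "Reflective idealization" is
# modeled as repeatedly asking the current procedure which nearby procedure it
# would prefer, and moving a small step in that direction. The landscape below
# is invented so that there are two stable fixed points, a roughly "egoist"
# one near x = 0.1 and a roughly "altruist" one near x = 0.9, separated by an
# unstable boundary at x = 0.5.

def reflective_preference_gradient(x: float) -> float:
    """Direction the current procedure 'wants' to move its own weight.

    Hypothetical preference landscape: stable attractors near 0.1 and 0.9,
    unstable boundary at 0.5.
    """
    return -(x - 0.1) * (x - 0.5) * (x - 0.9)

def idealize(x0: float, step: float = 0.5, iterations: int = 200) -> float:
    """Iterate the self-idealization process from a starting weight x0."""
    x = x0
    for _ in range(iterations):
        x += step * reflective_preference_gradient(x)
        x = min(max(x, 0.0), 1.0)  # keep the weight inside [0, 1]
    return x

if __name__ == "__main__":
    for start in (0.2, 0.4, 0.6, 0.8):
        print(f"start={start:.1f} -> idealized weight={idealize(start):.2f}")
    # Starts below 0.5 settle near 0.1; starts above 0.5 settle near 0.9.
    # Same idealization rule, different equilibria, decided by initial
    # conditions - which is all the toy is meant to illustrate.
```

The point of the toy is only that "idealize whatever procedure you start with" is not guaranteed to have a unique answer; whether the real human case looks like this is exactly the empirical question.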
Either way, what I'm trying to convey here is a particular spirit of approach to the problem of values in friendly AI: that the answers should come from a scientific study of how humans actually think, that the true ideals and priorities of human beings are to be found by a study of the computational particulars of human thought, and that all our ideologies and moralities are just flawed attempts by this computational process to ascertain its own nature.
If such an idealization exists, that would of course be preferable.
I suspect it doesn't, which may color my position here. But I think it's important to consider the alternatives if there is no generalizable ideal: specifically, we should work from the opposing end and try to generalize from specific instances. Even if we can't arrive at Strong Friendliness (the fully generalized ideal of human morality), we might still arrive at Weak Friendliness (some generalized ideal that is at least acceptable to a majority of people).
Because the alternative for those of us who aren't neurologists, as far as I can tell, is to wait.