Perhaps they see you as splitting hairs between being seen as taking over the world, and actually taking over the world. In your scenario you are not seen as taking over the world because you eliminate the ability to see that - but that means that you've actually taken over the world (to a degree greater than anyone has ever achieved before).
But in point of fact, you're right about the claim as stated. As for the downvotes - voting is frequently unfair, here and everywhere else.
Thanks for explaining!
I didn't mean to split hairs at all. I'm surprised that so many here seem to take it for granted that, if one had unlimited power, one would choose to let other people retain some say and some autonomy. If I had to listen to anybody else in order to make the best possible decision about what to do with the world, that would mean I had less than unlimited power. Someone who has unlimited power will always do the right thing, by nature.
And besides:
Suppose I had less than unlimited power but still "rather complete" power over every human being, and suppose I created what would be a "utopia" only to some, without changing anybody's mind against their will, and suppose some people then hated me for having created that "utopia". Why would they hate me? Because they would be unhappy. If I simply made them constantly happy by design (I wouldn't even need to make them intellectually approve of my utopia to do that), they wouldn't hate me, because a happy person doesn't hate.
Therefore, even in a scenario where I had not only "taken over the world" but was also seen as having taken over the world, nobody would hate me.
Suppose you say it would be wrong of me to make the haters happy "against their will". Why would that be wrong, if they would be happy to be happy once they had become happy? Should we not try to prevent suicides either? Not even the most obviously premature ones, not even temporarily, not even just to make the attempter rethink their decision a little more thoroughly?
Making a hater happy "against his will", with the result that he stops hating, is (I think) comparable to preventing a premature suicide in order to give that person an opportunity to reevaluate his situation and come to a better decision by himself. By respecting only what a person wants right now, you are not respecting "that person, including who he will be in the future"; you are respecting only a tiny fraction of that. Strictly speaking, even the "now" we are talking about is in the future: if you are deciding now to act in someone's interest, you should base your decision on your expectation of what he will want by the time your action starts affecting him (which is not exactly now), rather than on what he wants right now. So whenever you respect someone's preferences, you are (or at least should be) respecting his future preferences, not his present ones.
(Suppose, for example, that you strongly suspect that one second from now I will prefer a painless state of mind, but you see that right now I'm trying to cut a piece of wood in a way that will make me cut my leg in one second if you don't interfere. You should then interfere, and that can be explained, if by nothing else, by your expectation of what I will want one second from now, even if right now I have no preference other than getting that piece of wood cut in two.)
I suggest one should respect another person's (expected) distant future preferences more than his "present" (that is, very near future) ones, because his future preferences are more numerous (there is more time for them) than his "present" ones. One would arguably be respecting him more that way, because one would be respecting more of his preferences, not favoring any one preference over another just because it happens to occur at a certain time.
This way, hedonistic utilitarianism can be seen as compatible with preference utilitarianism.