Zaine comments on Eudaimonic Utilitarianism - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (34)
This is an imperative for any rational agent insofar as the situation warrants. To assist in this process, philosophers develop decision theories. Decision theories are designed to assist an agent in processing information and deciding a course of action in furtherance of the agent's values; they do not assist in determining what is worth valuing. Theories of proper moral conduct fill this gap.
That does indeed seem like an intermediary course of action designed to further the values of both Collective-B and Agent A. This still feels unsatisfactory, but as I cannot reason why, I must conclude I have a true rejection somewhere I can't find at the moment. I was going to point out that the above scenario doesn't reflect human behaviour, but there's no need: it demonstrates the moral ideal to which we should strive.
Perhaps I object to the coining, as it seems a formalisation of what many do anyway, yet that's no reason to - Aha!
My true rejection lies in your theory's potential for being abused. Were one to claim they knew better than any other what would achieve others' Areté, they could justify behaviour that in fact infringes upon others' quest for Areté; they could falsely assume the role of Omega.
In the counter case of Preference Utilitarianism, one must account for the preferences of others in one's own utility calculation. It has the same pitfall, though: one may claim that the 'true' preferences of others differ from their 'manifest' preferences.
The difference lies in each theory's foundations. Preference utilitarianism is founded upon the empathic understanding that others pursuing their value function makes them, and thus those around them, more fulfilled. In your theory, one can always claim, "If you were only more rational, you would see I am in the right on this. Trust me."
One becoming an evil overlord would also constitute a moral good in your theory, if their net capacity for achievement exceeds that of those whom they prey upon. I make no judgement on this.
Honestly though, I'm nitpicking by this point. This is quite clearly written (setting aside the Adultery calculation), and good on you for essaying to incorporate eudaimonia into a coherent moral theory.