People have purposes (Fyfe calls them desires, but I think "purpose" is the more accurate term) and act to achieve them. So far, so good. However, I do not see (having read the e-book and Fyfe's general article on DU, and skimmed the FAQ) where he develops any calculus for the weighing of desires against each other that he bases the utilitarian part of the thesis on. Using easy examples like rape and child abuse just gives everyone the same bottom line to defend. Maybe Fyfe didn't build the argument in order to arrive at the "right" answers to those examples, but it plays into that bias in his audience. I was entirely unsurprised when the author of the e-book went on to smuggle in some quite contentious claims, unargued:
More practically, if we desire a stable economy and falsely believe that Libertarian strategies will deliver a stable economy, we may end up thwarting millions of desires instead of fulfilling them. If we desire good health and believe that New Age superstitions or religious prayers are more effective than scientific medicine, we may end up thwarting more desires than we fulfill. [p.41]
And no, "if" and "may" don't excuse this.
However, I do not see (having read the e-book and Fyfe's general article on DU, and skimmed the FAQ) where he develops any calculus for the weighing of desires against each other that he bases the utilitarian part of the thesis on.
I've made the same complaint on his blog.
One possible flaw with this system is that you could have propositional attitudes towards propositional attitudes, ad infinitum. So we should expect an infinite hierarchy of classes of attitudes (beliefs, desires, desires to desire, desires to desire to desire, etc.). I'd like to see the theory discuss whether we actually observe this; and if not, why not.
I asked Fyfe about this. The system handles a "desire that I have a desire that X" in exactly the same way that it handles any other desire.
The best theory of morality I've ever found is the one invented by Alonzo Fyfe, which he chose to call "desire utilitarianism."
This short e-book (warning: pdf), written by a commenter on Alonzo's blog, describes the theory very well. He also wrote a FAQ.
One great advantage of this theory is that what it describes actually exists, even if you prefer to use the word "morality" to mean something else. Even a community of paperclip maximizers may, amazingly enough, find something in it relevant.