A third possibility: Humans aren't in general capable of accurately reflecting on their preferences.
Three is pretty much like one: if utility functions work, there must be some way of figuring them out, and I hoped someone had figured it out already.
If utility functions are a bad match for human preferences, that would seem to imply that humans simply tend not to have very consistent preferences. What major premise does this invalidate?
The utilitarian model being wrong doesn't necessarily mean that a different model, based on different assumptions, doesn't exist. I don't know which assumptions would need to be broken.
Re violence, do see "Bayesians vs. Barbarians."
I hope to respond to that post shortly, after giving it some thought.
Gris, just as a bias against violence may be the reason it's hardly ever considered, it may alternatively be not only a rational position but a strategically sensible one. Please consider looking at the literature on strategic nonviolence. The substantial body of work at the Albert Einstein Institute is good for understanding nonviolent strategy and tactics against regimes, and the insights it provides translate into courses of action in other conflicts as well.