@Unknown
So if everyone is a deontologist by nature, shouldn't a "normalization" of intuitions result in a deontological system of morals? If so, what makes you look for the right utilitarian system?
@Sean
If your utility function u were replaced by 3u, there would be no observable difference in your behavior. So which of these functions is declared real and goes on to the interpersonal summing? "The same factor for everyone" isn't an answer, because if u_you doesn't equal u_me, "the same factor" is simply meaningless.
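To make the point concrete, here is a minimal sketch (my own illustration, not from the thread, with made-up lotteries and payoffs): an agent maximizing expected utility makes exactly the same choices under u and under 3u, so behavior alone cannot single out one of them for interpersonal summing.

```python
# Two hypothetical lotteries over outcomes, as (probability, outcome) pairs.
lotteries = {
    "A": [(0.5, "win"), (0.5, "lose")],
    "B": [(0.9, "draw"), (0.1, "win")],
}
# One candidate utility function u.
u = {"win": 10.0, "draw": 4.0, "lose": 0.0}

def best_choice(utility):
    """Return the lottery with the highest expected utility."""
    expected = {
        name: sum(p * utility[outcome] for p, outcome in lottery)
        for name, lottery in lotteries.items()
    }
    return max(expected, key=expected.get)

# Scaling u by any positive constant leaves every choice unchanged.
u3 = {outcome: 3 * value for outcome, value in u.items()}
print(best_choice(u))   # "A" (expected utilities 5.0 vs 4.6)
print(best_choice(u3))  # "A" again: 3u is behaviorally indistinguishable
```

Since expectation is linear, multiplying every utility by a positive constant multiplies every expected utility by the same constant and preserves all comparisons.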
@Tomhs2
A < B < C < D doesn't imply that there's some k such that kA>D
Yes it does.
I think you're letting the notation confuse you. It would imply that if A, B, C, D were e.g. real numbers, and that is the context the "<" sign is mostly used in. But orders can exist on sets other than sets of numbers. You can, for example, sort (order) the telephone book alphabetically, so that Cooper < Smith, and still there is no k so that k*Cooper > Smith.
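The telephone-book point can even be demonstrated in code (a playful sketch of my own: Python happens to define both lexicographic comparison on strings and a repetition operator `*`, which stands in for "multiplication" here):

```python
# Lexicographic string comparison is a total order, but it is not
# Archimedean: no number of repetitions of "Cooper" ever sorts past
# "Smith", because k * "Cooper" still begins with 'C', and 'C' < 'S'.
assert "Cooper" < "Smith"
for k in range(1, 1000):
    assert k * "Cooper" < "Smith"  # "CooperCooper..." is still < "Smith"
print("No k found: the order exists, the Archimedean property does not.")
```

So the existence of an order says nothing by itself about whether repeated multiples of a smaller element can exceed a larger one.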
@most people here:
A lot of confusion is caused by the unspoken premise that a moral system should sort outcomes rather than actions, so that it doesn't matter who would do the torturing or speck-placing. Now for Eliezer that assumption is de fide, because otherwise the concept of a friendly AI (sharing our ends and choosing, with its superior intelligence, the means declared unimportant) is meaningless. But the assumption contradicts basically everyone's intuition. So why should it convince anyone not following Eliezer's religion?
[Edit: fixed some typos and formatting years later]
So what exactly do you multiply when you shut up and multiply? Can it be anything other than a function of the consequences? Because if it is a function of the consequences, you do believe, or at least act as if you believed, your #4.
In which case I still want an answer to my previously raised and unanswered point: as Arrow demonstrated, a contradiction-free aggregate utility function derived from different individual utility functions is not possible. So either you need to impose uniform utility functions, or your "normalization" of intuition leads to a logical contradiction. It is that simple, because it is math.
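Arrow's theorem itself is a general result about aggregation rules, but the kind of trouble it points to can be shown in miniature with the classic Condorcet cycle it generalizes (a sketch of my own, with hypothetical voters): three individually transitive rankings whose majority aggregate is cyclic, i.e. not a consistent order at all.

```python
# Three voters, each with a perfectly transitive ranking (best first).
voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority ranks x above y."""
    votes = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return votes > len(voters) / 2

print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True: A > B > C > A, a cycle
```

Each pairwise majority is 2-to-1, yet the aggregate "prefers" A to B, B to C, and C to A, so no aggregate ranking exists that respects all three majorities.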
1. In this whole series of posts you are silently presupposing that utilitarianism is the only rational system of ethics. Which is strange, because if people have different utility functions, Arrow's impossibility theorem makes it impossible to arrive at a "rational" (in this blog's Bayesian-consistent abuse of the term) aggregate utility function. So irrationality is not only rational but the only rational option. Funny what people will sell as overcoming bias.
2. In this particular case the introductory example fails, because 1 killing != -1 saving. Removing a drowning man from the pool is obviously better than merely abstaining from drowning another man in the pool.
3. The feeling of superiority over all those biased proles is itself a bias. In fact it is very obviously among your main biases, and consequently one you should spend a disproportionate amount of resources on overcoming.
@Eisegates
Yes, I was operating on the implicit convention that true statements must be meaningful, so I could also say there is no k such that I have exactly k quobbelwocks.
The nonexistence of a *-operator (and of a +-operator) is actually the point. I don't think preferences of different persons can be meaningfully combined, and that includes that {possible world-states} or {possible actions} don't, in your formulation, contain the sort of objects to which our everyday understanding of multiplication normally applies. Now if you insist on an intuitively defined *-operator, every bounded utility function is an example. For example, my utility for the amount c of chocolate available for consumption in some given timeframe could well be approximately 1 - exp(-min(c/1kg, 1)), so 100g < 1kg but there is no k to make k*100g > 1kg. That is, of course, nothing new even in this discussion.

Also, more directly to the point: me doing evil is something I should avoid more than other people doing evil. So when I do the choosing, "I kill 1 innocent person" < "someone else kills 1 innocent person", but there is no k so that "I kill 1 innocent person" > "someone else kills k innocent persons". In fact, if a kidnapper plausibly threatened to kill his k hostages unless I killed a random passerby, almost nobody would think me justified in doing so for any imaginable value of k. That people may think differently for unimaginably large values of k is a much more plausible candidate for failure to be rational with large numbers than not adding specks up to torture.
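The chocolate example can be checked numerically. This sketch reads the formula as u(c) = 1 - exp(-min(c/1kg, 1)) (my reconstruction of the expression above, with c measured in kilograms):

```python
import math

def u(c_kg):
    """Bounded chocolate utility: increasing up to 1 kg, flat beyond."""
    return 1 - math.exp(-min(c_kg, 1.0))

# 100 g is worth strictly less than 1 kg...
assert u(0.1) < u(1.0)

# ...but no multiple of 100 g is ever worth strictly more than 1 kg:
# past k = 10 the min() clamps, so u(k * 0.1) equals u(1.0) exactly.
for k in range(1, 10_000):
    assert not u(k * 0.1) > u(1.0)
print("No k with u(k * 100g) > u(1kg) found.")
```

The min() clamp is what defeats the Archimedean intuition: scaling the input eventually stops scaling the utility.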
But basically I wasn't making a claim, just trying to give an understandable (or so I thought) formulation for denying Tomhs2's non-technically stated claim that the existence of an order implies the Archimedean axiom.
@Bob
If it's true, and you seem to agree, that our intuition focuses on actions over outcomes, don't you think that's a problem? Perhaps you're not convinced that our intuition reflects a bias? That we'd make better decisions if we shifted a little bit of our attention to outcomes?
You nailed it. Not only am I not convinced that our intuition on this point reflects a bias, I'm actually convinced that it doesn't. Utility is irrelevant; rights are relevant. And while I may sacrifice a lesser right for a greater right, I can't sacrifice a person for another person. So in the torture example I may not flip the (50 years, 1 person / 49 years, 2 persons) switch either way.
@Doug S.
I disagree. An objective U doesn't exist, and individual Us can't be meaningfully aggregated. Moreover, if the individual Us are meant to be von Neumann-Morgenstern functions, they don't exist either.