Can you give an example of a case where they don't overlap, that PhilGoetz is arguing about?
Giving one future self u=10 and another u=0 is equally as good as giving one u=5 and another u=5.
So, to give a concrete example: you have $10. You can spend half the money today and half tomorrow, gaining 5 utilons today and 5 tomorrow, or spend all of it today, gaining 10 utilons today and 0 tomorrow. These outcomes both give you equal numbers of utilons, so they're equal.
Phil says that the moral reason they're both equal is because they both have the same amount of average utility distributed across instances of you. He then uses that as a reason that average utilitarianism is correct across different people, since there's nothing special about you.
However, an equally plausible interpretation is that the reason they are morally equal in the first instance is that the aggregate utilities are the same. Although average utilitarianism and aggregate utilitarianism overlap when N = 1, in many other cases they disagree. Average utilitarianism would rather have one extremely happy person than twenty moderately happy people, for example. This disagreement means that average and aggregate utilitarianism are not the same theory (and they rest on different metaethical justifications), so he's not justified either in his initial privileging of average utilitarianism or in extrapolating it to large groups of people.
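The disagreement is easy to make concrete with a quick sketch. The utility numbers below are made up for illustration; the point is only that the two views agree in the single-person case and come apart once N > 1:

```python
# Illustrative comparison of average vs. aggregate (total) utilitarianism.
# All utility figures here are hypothetical.

def average_utility(utilities):
    return sum(utilities) / len(utilities)

def total_utility(utilities):
    return sum(utilities)

# Single-person case: utility split across two future selves.
split = [5, 5]          # 5 utilons today, 5 tomorrow
concentrated = [10, 0]  # all 10 utilons today
# Both views judge these equal, so the case can't tell them apart.
assert total_utility(split) == total_utility(concentrated)
assert average_utility(split) == average_utility(concentrated)

# Multi-person case: the views disagree.
one_ecstatic = [100]        # one extremely happy person
twenty_content = [20] * 20  # twenty moderately happy people
# Average utilitarianism prefers the single ecstatic person (100 > 20),
# while aggregate utilitarianism prefers the twenty (400 > 100).
```

So the N = 1 case Phil relies on is consistent with either theory, and only the multi-person cases discriminate between them.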
I'd just like to say that your complaints about length are pretty funny in their ironic stupidity.
I said that length was useful insofar as it added to communication. Was I particularly inefficient? I don't think so. As is, it's somewhat ironic, but I think only superficially so, because there isn't any real clash between what I claim as ideal and what I engage in (because, again, I think I was efficient). And there's no stupidity there at all, or at least none that I see. You'll need to go into more detail here.
Also, you say changing the nature of the game like it's not important. It's like you want to play basketball back before they cut the bottoms out of baskets.
I understand what you're getting at, but what specifically is important about this change? I see the added resource intensity as one thing but that's all I can think of whereas I'm reading your comment as hinting at some more fundamental change that's taking place.
(A few seconds later, my thoughts.)
One change might be that the goals have shifted. It becomes about status and not about solving problems. Maybe that is what you had in mind? Or something else?
Yes, at some level one can interpret Kant as saying something like "use decision theory, not game theory."
Quick Question, a few weeks later: would you be willing to take a guess as to what problems might have caused my comment to be downvoted? I'm stumped.
I like the vibes.
I don't like this part. First, thinking that you're closER to the truth is not really a problem; it's thinking you've arrived at the truth that arguably is. Second, I think sometimes human beings can indeed find the truth. Underconfidence is just as much a sin as overconfidence, and referring to hubris in the way that you did seems like it would encourage false humility. I think you should say something more like "for every hundred ideas professed to be indisputable truths, ninety-nine are false", and maybe add something about how there's almost never good justification to refuse to even listen to other people's points of view.
I don't agree with this either, or most of the paragraph before it: there are strong trends.