Isn't this equivalent to total utilitarianism that only takes into account the utility of already extant people? Also, isn't this inconsistent over time (someone who used this as their ethical framework could predict specific discontinuities in their future values)?
I suppose you could say that it's equivalent to "total utilitarianism that only takes into account the utility of already extant people, and only takes into account their current utility function [at the time the decision is made] and not their future utility function".
(Under mere "total utilitarianism that only takes into account the utility of already extant people", the government could wirehead its constituency.)
Yes, this is explicitly inconsistent over time. I would actually argue that the utility function of any group of people will be inconsistent over time (as preferences evolve, new people join, and old people leave), and any decision-making framework needs to be able to handle that inconsistency intelligently. Failure to handle that inconsistency intelligently is what leads to the Repugnant Conclusion.
It's not obvious that you've gained anything here. We can reduce to total utilitarianism -- just assume that everyone's utility is zero at the decision point. You still have the repugnant conclusion issue where you're trying to decide whether to create more people or not based on summing utilities across populations.
My intended solution was that, if you check the utility of your constituents from creating more people, you're explicitly not taking the utility of the new people into account. I'll add a few sentences at the end of the article to try to clarify this.
Another thing I can say is that, if you assume that everyone's utility is zero at the decision point, it's not clear why you would see a utility gain from adding more people.
Total Utility is Illusionary
(Abstract: We have the notion that people can have a "total utility" value, defined perhaps as the sum of all their changes in utility over time. This is usually not a useful concept, because utility functions can change. In many cases the less-confusing approach is to look only at the utility from each individual decision, and not attempt to consider the total over time. This leads to insights about utilitarianism.)
Let's consider the utility of a fellow named Bob. Bob likes to track his total utility; he writes it down in a logbook every night.
Bob is a stamp collector; he gets +1 utilon every time he adds a stamp to his collection, and he gets -1 utilon every time he removes a stamp from his collection. Bob's utility was zero when his collection was empty, so we can say that Bob's total utility is the number of stamps in his collection.
One day a movie theater opens, and Bob learns that he likes going to movies. Bob counts +10 utilons every time he sees a movie. Now we can say that Bob's total utility is the number of stamps in his collection, plus ten times the number of movies he has seen.
(A note on terminology: I'm saying that Bob's utility function is the thing that emits +1 or -1 or +10, and his total utility is the sum of all those emitted values over time. I'm not sure if this is standard terminology.)
This should strike us as a little bit strange: Bob now has a term in his total utility which is mostly based on history, and mostly independent of the present state of the world. Technically, we might handwave and say that Bob places value on his memories of watching those movies. But Bob knows that's not actually true: it's the act of watching the movies that he enjoys, and he rarely thinks about them once they're over.
If a hypnotist convinced Bob that he had watched ten billion movies, Bob would write down in his logbook that he had a hundred billion utilons. (Plus the number of stamps in his stamp collection.)
Let's talk some more about that stamp collection. Bob wakes up on June 14 and decides that he doesn't like stamps any more. Now, Bob gets -1 utilon every time he adds a stamp to his collection, and +1 utilon every time he removes one. What can we say about his total utility? We might say that Bob's total utility is the number of stamps in his collection at the start of June 14, plus ten times the number of movies he's watched, plus the number of stamps he removed from his collection after June 14. Or we might say that all Bob's utility from his stamp collection prior to June 14 was false utility, and we should strike it from the record books. Which answer is better?
...Really, neither answer is better, because the "total utility" number we're discussing just isn't very useful. Bob has a very clear utility function which emits numbers like +1 and +10 and -1; he doesn't gain anything by keeping track of the total separately. His total utility doesn't seem to track how happy he actually feels, either. It's not clear what Bob gains from thinking about this total utility number.
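The bookkeeping ambiguity can be made concrete with a toy sketch (the event list and all the numbers here are my own illustrative assumptions, not anything from the post). Both accounting conventions for Bob's history produce different "totals", while the delta attached to each individual decision is identical under both:

```python
# Bob's history as (event, delta) pairs, where each delta is whatever his
# utility function emitted *at the time* of the event.
history = [
    ("add stamp", +1),
    ("add stamp", +1),
    ("watch movie", +10),
    ("remove stamp", +1),  # after June 14, removing a stamp emits +1
]

# Convention 1: total utility = running sum of every delta ever emitted.
total_keep_history = sum(d for _, d in history)

# Convention 2: strike all pre-June-14 stamp utility as "false utility".
total_strike_stamps = sum(d for e, d in history if e != "add stamp")

print(total_keep_history)   # 13
print(total_strike_stamps)  # 11
```

The two totals disagree, but nothing about Bob's actual decisions changes: each event still emitted the same delta either way, which is the sense in which the total is mere bookkeeping.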
I think some of the confusion might be coming from Less Wrong's focus on AI design.
When you're writing a utility function for an AI, one thing you might try is to specify your utility function by specifying the total utility first: you might say "your total utility is the number of balls you have placed in this bucket" and then let the AI work out the implementation details of how happy each individual action makes it.
However, if you're looking at utility functions for actual people, you might encounter something weird like "I get +10 utility every time I watch a movie", or "I woke up today and my utility function changed", and then if you try to compute the total utility for that person, you can get confused.
Let's now talk about utilitarianism. For simplicity, let's assume we're talking about a utilitarian government which is making decisions on behalf of its constituency. (In other words, we're not talking about utilitarianism as a moral theory.)
We have the notion of total utilitarianism, in which the government tries to maximize the sum of the utility values of each of its constituents. This leads to "repugnant conclusion" issues in which the government generates new constituents at a high rate until all of them are miserable.
We also have the notion of average utilitarianism, in which the government tries to maximize the average of the utility values of each of its constituents. This leads to issues -- I'm not sure if there's a snappy name -- where the government tries to kill off the least happy constituents so as to bring the average up.
The problem with both of these notions is that they're taking the notion of "total utility of all constituents" as an input, and then they're changing the number of constituents, which changes the underlying utility function.
I think the right way to do utilitarianism is to ignore the "total utility" thing; that's not a real number anyway. Instead, every time you arrive at a decision point, evaluate what action to take by checking the utility of your constituents from each action. I propose that we call this "delta utilitarianism", because it isn't looking at the total or the average, just at the delta in utility from each action.
This solves the "repugnant conclusion" issue because, at the time when you're considering adding more people, you are explicitly evaluating the utility of your constituents at that time, which does not include the potential new people.
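The decision rule described above can be sketched in a few lines of Python. Everything here (the helper names `choose_action` and `delta_utility`, the example people, and the numbers) is an illustrative assumption of mine, not something from the post:

```python
def choose_action(constituents, actions, delta_utility):
    """Pick the action with the largest summed utility delta for the
    people who exist at the decision point.

    delta_utility(person, action) -> how much the action changes that
    person's utility, per their *current* utility function.
    """
    def score(action):
        # Only current constituents count; a person who would be created
        # by the action contributes nothing to the score.
        return sum(delta_utility(p, action) for p in constituents)
    return max(actions, key=score)

# Example: two existing people mildly dislike crowding. "add person"
# would create a third person, but that person's (potential) utility is
# excluded from the sum, so the repugnant-conclusion pump never starts.
deltas = {
    ("alice", "add person"): -1, ("bob", "add person"): -1,
    ("alice", "do nothing"):  0, ("bob", "do nothing"):  0,
}
best = choose_action(["alice", "bob"], ["add person", "do nothing"],
                     lambda p, a: deltas[(p, a)])
print(best)  # "do nothing"
```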
My theory is that Lucius trumped up these charges against Hermione entirely independent of the midnight duel. He was furious that Hermione defeated Draco in combat, and this is his retaliation.
I doubt that Hermione attended the duel; or, if she did attend it, I doubt that anything bad happened.
My theory does not explain why Draco isn't at breakfast. So maybe my theory is wrong.
I am confused about why H&C wanted Hermione to be defeated by Draco during the big game when Lucius was watching. If you believe H&C is Quirrell (and I do): did Quirrell go to all that trouble just to impress Lucius with how his son was doing? That seems like an awful risk for not much reward.
...Followup: Holy crap! I know exactly one person who wants Hermione to be defeated by Draco when Lucius is watching. Could H&C be Dumbledore?
The new Update Notifications feature (http://hpmor.com/notify/) is pretty awesome, but I have a feature request. Could we get some sort of privacy policy for that feature?
Like, maybe just a sentence at the bottom saying "we promise to only use your email address to send you HPMOR notifications, and we promise never to share your email address with a third party"?
It's not that I don't trust you guys (and in fact I have already signed up) but I like to check on these things.
I found this series much harder to enjoy than Eliezer's other works -- for example the Super Happy People story, the Brennan stories, or the Sword of Good story.
I think the issue was that Harry was constantly, perpetually, invariably reacting to everything with shock and outrage. It got... tiresome.
At first, before I knew who the author was, I put this down to simple bad writing. Comments in Chapter 6 suggest that maybe Harry has some severe psychological issues, and that he's deliberately being written as obnoxious and hyperactive in order to meet plot criteria later.
But it's still sort of annoying to read.
I did enjoy the exchange with Draco in Chapter 5, mind.
(I encountered the series several weeks ago, without an attribution for the author. I read through Chapter 6 and stopped. Now that I know it was by Eliezer, I may go back and read a few more chapters.)
I think the issue was that Harry was constantly, perpetually, invariably reacting to everything with shock and outrage. It got... tiresome.
But I went back much later and read it again, and there wasn't nearly as much outrage as I remembered.
Good story!
I wonder whether Eliezer has read (or should read) this review of Ender's Game (a book I never read myself, but the reviewer seems to provide a useful warning to authors).
Ouch! I -- I actually really enjoyed Ender's Game. But I have to admit there's a lot of truth in that review.
Now I feel vaguely guilty...
If you take the sum of utility over all actions when you choose option A, minus the sum over all actions when you choose option B, then everything before the decision point cancels out, and you're left with just the difference in utility between option A and option B. The two are equivalent.
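A quick numeric check of that cancellation (the deltas here are made-up numbers, purely for illustration):

```python
# Deltas accrued before the decision point are shared by both branches.
shared_past = [+1, +10, -1]
option_a, option_b = +5, +2  # deltas from the two candidate actions

total_a = sum(shared_past) + option_a
total_b = sum(shared_past) + option_b

# The shared history cancels: comparing totals is the same as comparing
# the per-decision deltas directly.
print(total_a - total_b)  # 3, i.e. option_a - option_b
```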
Technically, delta utilitarianism is slightly more resistant to infinities: as long as any two actions have a finite difference, you can compare them, even if the total utility is infinite. I don't think that buys very much in practice, though.
I think the key difference is that delta utilitarianism handles it better when the group's utility function changes. For example, if I create a new person and add it to the group, that changes the group's utility function. Under delta utilitarianism, I explicitly don't count the preferences of the new person when making that decision. Under total utilitarianism, [most people would say that] I do count the preferences of that new person.