Gunnar_Zarncke comments on To capture anti-death intuitions, include memory in utilitarianism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (34)
Under your solution, every life created implies infinite negative utility. Due to thermodynamics or whatever (big rip? other cosmological disaster that happens before heat death?) we can't keep anyone alive forever. No matter how slow the rate of disutility accumulation, the infinite time after the end of all sentience makes it dominate everything else.
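The divergence can be made explicit (my own illustration, not from the comment): if a remembered death at time $t_d$ contributes even a tiny constant disutility rate $\epsilon > 0$ for all time thereafter, the accumulated disutility is

$$\int_{t_d}^{\infty} -\epsilon \, dt = -\infty,$$

so any finite amount of utility from the person's life is dominated.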
If I understand you correctly, your solution is that the utility function actually changes every time someone is created, so before that person is created, you don't care about their death. One weird result of this is that if there will soon be a factory that rapidly creates and then painlessly destroys people, we don't object (and while the factory is running, we feel terrible about everything that has happened in it so far, but we still don't care to stop it). Or to put it in less weird terms, we won't object to spreading some kind of poison which affects newly developing zygotes, painlessly reducing their future lifespan.
There's also the incentive for an agent with this system to self-modify to stop changing their utility function over time.
That's true, but note that if e.g. 20 billion people have died up to this point, then that penalty of -20 billion gets applied equally to every possible future state, so it won't alter the relative ordering of those states. So the fact that we're getting an infinite amount of disutility from people who are already dead isn't a problem.
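The ordering-invariance point can be sketched concretely (a toy illustration of my own, not from the thread): subtracting the same penalty from every candidate future state cannot change which state ranks highest, because a ranking is invariant under adding a constant.

```python
def rank_states(utilities, accumulated_death_penalty):
    """Rank future states by utility after subtracting a penalty that is
    identical for every state (e.g. disutility from deaths that have
    already happened). The ranking is unaffected by the penalty's size."""
    adjusted = {s: u - accumulated_death_penalty for s, u in utilities.items()}
    return sorted(adjusted, key=adjusted.get, reverse=True)

base = {"A": 10.0, "B": 7.0, "C": 3.0}
# The same ordering results whether the shared penalty is 0 or 20 billion.
assert rank_states(base, 0.0) == rank_states(base, 20e9) == ["A", "B", "C"]
```

The problem the next comment raises is different: creating a *new* person changes the penalty differently across futures, so there the constant-shift argument no longer applies.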
Though now that you point it out, it is a problem that, under this model, creating a person who you don't expect to live forever has a very high (potentially infinite) disutility. Yeah, that breaks this suggestion. Only took a couple of hours, that's ethics for you. :)
That's an interesting idea, but it wasn't what I had in mind. As you point out, there are some pretty bad problems with that model.
It only breaks that specific choice of memory UFU. The general approach admits lots of consistent functions.
That's true.
I wonder whether professional philosophers have made any progress with this kind of approach? At least in retrospect it feels rather obvious, but I don't recall hearing anyone mention something like this before.
It's not unusual to count "thwarted aims" as a positive bad of death (as I've argued for myself in my paper Value Receptacles), which at least counts against replacing people with only slightly happier people (though still leaves open that it may be worthwhile to replace people with much happier people, if the extra happiness is sufficient to outweigh the harm of the first person's thwarted ends).
Philosophers are screwed nowadays. If they apply the scientific method and reductionism to social-science topics, they cut away too much. If they stay with vague notions which do not cut away the details, they are accused of being vague. The vagueness is there for a reason: it is a kind of abstraction of the essential complexity of the abstracted domain.