Adele_L comments on Open thread, August 5-11, 2013 - Less Wrong Discussion
I haven't explored that idea; can you be more specific about what this idea might bring to the table?
Are you sure? You believe there are some people for whom the morally right thing to do is to inflict as much misery and suffering as you can, keeping them alive so you can torture them forever, without there necessarily even being a benefit to yourself or anyone else in doing this?
The negative utility need not be boundless or even monotonic. A coherent preference system could count a modest amount of misery experienced by people fitting certain criteria as positive, while extreme misery and torture of the same individual is evaluated negatively.
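A minimal sketch of such a non-monotonic preference, with made-up thresholds and weights (the cutoff and slope are purely illustrative assumptions, not anything proposed in the thread):

```python
# Hypothetical sketch: a preference that is non-monotonic in another
# person's misery -- modest misery scores positive, extreme misery
# scores negative. The cutoff and slope are illustrative assumptions.

def utility_of_their_misery(misery: float) -> float:
    """Utility assigned to `misery`, scaled 0 (none) to 1 (maximal torture)."""
    MILD_CUTOFF = 0.3  # assumed boundary between "modest" and "extreme"
    if misery <= MILD_CUTOFF:
        return misery  # modest misery counts as positive
    # beyond the cutoff, utility falls off and eventually goes negative
    return MILD_CUTOFF - 2.0 * (misery - MILD_CUTOFF)

print(utility_of_their_misery(0.2))  # modest misery -> positive
print(utility_of_their_misery(0.9))  # extreme misery -> negative
```

The point is just that "counts some suffering as good" does not force "counts unbounded suffering as better": the function is coherent yet ranks torture below no interaction at all.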
I also will upvote posts that have been downvoted too much, even if I wouldn't have upvoted them if they were at 0.
Trivially, nega-you who hates everything you like (oh, you want to put them out of their misery? Too bad they want to live now, since they don't want what you want). But such a being would certainly not be a human.
This is not a being in the reference class "people".
I'm not sure why you're both hung up on the idea that the things hypothetical-me is interacting with need to be human. Manfred: I address a similar entity in a different post. Adele_L: ...and?
You said this:
In this context, 'people' typically refers to a being with moral weight. What we know about morality comes from our intuitions mostly, and we have an intuitive concept 'person' which counts in some way morally. (Not necessarily a human, sentient aliens probably count as 'people', perhaps even dolphins.) Defining an arbitrary being which does not correspond to this intuitive concept needs to be flagged as such, as a warning that our intuitions are not directly applicable here.
Anyway, I get that you are basically trying to build revenge into a utility function. This is certainly possible, but a negated utility function is a particularly bad way to do it.
I was putting an upper bound on (what I thought of at the time as) how negative the utility-vector dot product would have to be before I would actually desire them to be unhappy. As to the last part, I am reconsidering this as possibly inefficient in general.
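The "utility vector dot product" framing above, and the earlier "nega-you" example, can be sketched as follows. The vectors, outcomes, and weights here are all invented for illustration; the only substantive point is that negating a value vector flips the sign of its dot product with the original:

```python
# Hypothetical sketch: represent each agent's values as a vector of
# weights over outcomes; the dot product measures how aligned two
# agents are. A perfect "nega-you" holds the negated vector, so the
# product is as negative as possible. All numbers are made up.

def dot(u, v):
    """Dot product of two equal-length weight vectors."""
    return sum(a * b for a, b in zip(u, v))

you = [0.9, -0.5, 0.3]       # assumed weights over three outcomes
ally = [0.8, -0.4, 0.2]      # someone with broadly similar values
nega_you = [-w for w in you] # "nega-you": hates everything you like

print(dot(you, ally))      # positive: values roughly aligned
print(dot(you, nega_you))  # negative: maximally opposed
```

An upper bound of the kind described would then be a threshold on this product below which one starts to prefer the other agent's unhappiness.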