
Bayeslisk comments on Open thread, August 5-11, 2013 - Less Wrong Discussion

3 points · Post author: David_Gerard · 05 August 2013 06:50AM


Comments (307)


Comment author: Bayeslisk 07 August 2013 07:10:52PM 0 points

I'm not sure why you're both hung up on the idea that the things hypothetical-me is interacting with need to be human. Manfred: I address a similar entity in a different post. Adele_L: ...and?

Comment author: Adele_L 07 August 2013 10:05:34PM 0 points

You said this:

I'm utterly convinced that the happiness of some people ought to count negatively

In this context, 'people' typically refers to beings with moral weight. What we know about morality comes mostly from our intuitions, and we have an intuitive concept of a 'person' that carries moral weight in some way. (Not necessarily a human: sentient aliens probably count as 'people', perhaps even dolphins.) If you define an arbitrary being that does not correspond to this intuitive concept, you need to flag it as such, as a warning that our intuitions are not directly applicable.

Anyway, I get that you are basically trying to build revenge into a utility function. This is certainly possible, but assigning negative weight to other people's utility is a particularly bad way to do it.

Comment author: Bayeslisk 07 August 2013 10:10:28PM 0 points

I was putting an upper bound on (what I thought at the time was) how negative the utility vector dot product would have to be before I actually desired someone's unhappiness. As for the last part, I am reconsidering that approach; it may be generally inefficient.
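For a concrete reading of the "utility vector dot product" mentioned above, one can sketch a total utility as the dot product of per-being moral weights with each being's happiness, where a negative weight encodes the revenge term. This is an illustrative interpretation, not Bayeslisk's actual formulation; the function name, weights, and numbers are all assumptions.

```python
def total_utility(weights, happiness):
    """Dot product of moral weights with each being's happiness level."""
    assert len(weights) == len(happiness)
    return sum(w * h for w, h in zip(weights, happiness))

# Most beings get positive weight; one being gets a negative weight,
# so increases in that being's happiness lower total utility.
weights   = [1.0, 1.0, -0.5]   # third entry is the "revenge" weight
happiness = [2.0, 3.0, 4.0]

print(total_utility(weights, happiness))  # 1.0*2 + 1.0*3 + (-0.5)*4 = 3.0
```

The "upper bound" in the comment would then be a threshold on how negative that third weight (or the resulting product) must be before the agent actively prefers the being's unhappiness.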