JGWeissman comments on Averaging value systems is worse than choosing one - Less Wrong

Post author: PhilGoetz · 29 April 2010 02:51AM · 5 points




Comment author: JGWeissman 29 April 2010 05:47:42PM 0 points

> > ... choosing a utility function that is easy to maximize ...
>
> Where in the TLP do you see this?

Phil is trying to find a combined value system that minimizes conflicts between values. This would allow tradeoffs to be avoided. (Figuring out which tradeoffs to make when your actual values conflict is a huge strength of utility functions.) Do you see another reason to be interested in this comparison of value system combinations?
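A minimal sketch of the tradeoff point above, using entirely hypothetical outcomes and utility numbers (none of them are from the post): averaging two conflicting utility functions produces a combined function whose maximum sits at the compromise outcome that neither agent ranks first.

```python
# Hypothetical example: two agents whose utility functions conflict over
# outcomes A and B, with C as a middling compromise. Numbers are
# illustrative only, not taken from Phil's post.

u1 = {"A": 1.0, "B": 0.0, "C": 0.6}  # agent 1 most prefers A
u2 = {"A": 0.0, "B": 1.0, "C": 0.6}  # agent 2 most prefers B

# Combine the two utility functions by simple averaging.
combined = {o: (u1[o] + u2[o]) / 2 for o in u1}  # A: 0.5, B: 0.5, C: 0.6

best = max(combined, key=combined.get)
print(best)  # C
```

The averaged function resolves the A-versus-B conflict by making the compromise outcome its maximizer, which is exactly the kind of tradeoff a utility function is supposed to adjudicate.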

> I understand, and I did respond to that question.

Do you talk about all useful things in all contexts? Otherwise, how is an explanation of why it is valuable a reasonable response to a question about what you did in a specific context?

> Do you have to respond to everything with an inane question? Your base level question has been answered.

If you think so ...

> Do you actually see this as controversial?

I see it as an unsupported claim. I see this question as useless rhetoric that distracts from your claim's lack of support, and from the points I was making. So, let's bring this back to the object level. Do you see a scenario where a group of ideally rational agents would want to combine their utility functions using this procedure? If you think it is only useful for more general agents to cope with their irrationality, do you see a scenario where a group of ideally rational agents who each care about a different general agent (and want that general agent to be effective at maximising its own fixed utility function) would advise the general agents they care about to combine their utility functions in this manner?

> It is not a description of his concept.

A concept that "can be extended to cover situations other than the most idealized ones" is your description of Phil's concept contained in your question. It would make this discussion a lot easier if you did not flatly deny reality.

> It is a question about your grounds for dismissing his model without any explanation.

Do you always accuse people of dismissing models without explanation when they have in fact dismissed a model with an explanation? (If you forgot, the explanation is that the model tries to figure out which combined value system/utility function is easiest to satisfy/maximise, instead of which one best represents the input value systems/utility functions that reflect the actual values of the group members.)

How do you like being asked questions which contain assumptions you disagree with?
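The distinction in the parenthetical above — a combined utility function that is easy to satisfy versus one that best represents the members' actual values — can be sketched with hypothetical numbers. Both scoring proxies (`ease` and `fidelity_error`) and all utilities below are made up for illustration; they are not from the post or the comment thread.

```python
# Two members with directly conflicting values over outcomes A and B.
u1 = {"A": 1.0, "B": 0.0}  # member 1's utility function
u2 = {"A": 0.0, "B": 1.0}  # member 2's (conflicting) utility function

# Two candidate combined utility functions (hypothetical):
flat = {"A": 0.9, "B": 0.9}  # nearly conflict-free: every outcome scores high
mean = {"A": 0.5, "B": 0.5}  # the straight average of u1 and u2

def ease(u):
    """Proxy for 'easy to satisfy': utility guaranteed by the worst outcome."""
    return min(u.values())

def fidelity_error(u):
    """Proxy for representation: squared distance from members' utilities."""
    return sum((u[o] - ui[o]) ** 2 for ui in (u1, u2) for o in u)

assert ease(flat) > ease(mean)                      # flat is easier to satisfy...
assert fidelity_error(flat) > fidelity_error(mean)  # ...but represents the members worse
```

Under these proxies the two criteria disagree: the near-constant function wins on ease of satisfaction while the average stays closer to the members' actual utilities, which is the gap the explanation above points at.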

Comment author: jimmy 29 April 2010 11:36:41PM 4 points

I think it'd be a good policy to answer the question before discussing why it might be misguided. If you only talk about the question without answering it, you end up running in circles and not making progress.

For example:

Instead of

> Is there always a problem with a concept if it can be extended to cover situations other than the most idealized ones?

> I deny that this is an accurate description of Phil's concept....

> It is not a description of his concept. It is a question about your grounds for dismissing his model without any explanation.

> A concept that "can be extended to cover situations other than the most idealized ones" is your description of Phil's concept contained in your question.

It could be

> Is there always a problem with a concept if it can be extended to cover situations other than the most idealized ones?

> No, of course not. I deny that this is an accurate description of Phil's concept....

> Well, I think it is because of X...