PhilGoetz comments on Averaging value systems is worse than choosing one - Less Wrong

Post author: PhilGoetz 29 April 2010 02:51AM




Comment author: Peter_de_Blanc 30 April 2010 01:10:42AM 4 points

> In the "moral values" domain, you're more likely to have discontinuous rules (e.g., "X is always bad", or "X<N is acceptable while X>N is not"), and to be performing logical inference over them. This produces situations that you can't solve directly, and it can lead to circular or indeterminate chains of reasoning and to multiple possible solutions.

This line of thinking is setting off my rationalization detectors. It sounds like you're saying, "OK, I'll admit that my claim seems wrong in some simple cases. But it's still correct in all of the cases that are so complicated that nobody understands them."

I don't know how to distinguish moral values from other kinds of values, but it seems to me that this isn't exactly the distinction that would be most useful for you to figure out. My suggestion would be to figure out why you think high IC is bad, and see if there's some nice way to characterize the value systems that match that intuition.

Comment author: PhilGoetz 30 April 2010 01:43:53AM 0 points

> My suggestion would be to figure out why you think high IC is bad, and see if there's some nice way to characterize the value systems that match that intuition.

That's a good idea. My "final reason" for thinking that high IC is bad may be that high-IC systems are a pain in the ass when you're building intelligent agents. They have a lot of interdependencies among their behaviors, get stuck waffling between them, and are hard to debug. But we (as designers and as intelligent agents) have mechanisms to deal with these problems; e.g., producing hysteresis by using nonlinear functions to sum activation from different goals (a toy sketch below).
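Here's a toy sketch of the hysteresis mechanism I mean; the goal names, margin, and gain are invented for illustration, not taken from any actual agent:

```python
import math

def select_behavior(activations, current, switch_margin=0.2, gain=4.0):
    """Pick a behavior from goal activations, with hysteresis.

    A steep sigmoid sharpens the competition between goals, and
    switch_margin adds stickiness to the currently active behavior,
    so the agent only switches when a competitor clearly dominates.
    """
    def squash(x):
        return 1.0 / (1.0 + math.exp(-gain * (x - 0.5)))

    scores = {name: squash(a) for name, a in activations.items()}
    if current is not None:
        scores[current] += switch_margin  # bias toward the current behavior
    return max(scores, key=scores.get)

# The raw activations cross back and forth near the decision boundary,
# but the margin keeps the agent from flip-flopping on every step.
current = None
for flee, feed in [(0.52, 0.50), (0.49, 0.51), (0.48, 0.52), (0.30, 0.70)]:
    current = select_behavior({"flee": flee, "feed": feed}, current)
    print(current)  # flee, flee, flee, feed
```

Without the margin, the agent would switch behaviors on the second step and switch back whenever the activations recrossed; with it, switching requires one goal to clearly win.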

My other final reason is that I consciously try to energy-minimize my own values, and I think other thoughtful people who aren't nihilists do too. Probably nihilists do too, if only for their own convenience.

My other other final reason is that energy minimization is what dynamical networks do. It's how they develop, as in, e.g., spin glasses, economies, or ecologies.
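To make that concrete, here's a toy Hopfield-style network standing in for a spin glass (the weights are invented): each asynchronous update aligns a unit with its local field, which, with symmetric weights, never raises the energy, so the system settles into a local minimum.

```python
import random

def energy(state, weights):
    """Spin-glass / Hopfield energy: E = -1/2 * sum_ij w[i][j] * s[i] * s[j]."""
    n = len(state)
    return -0.5 * sum(weights[i][j] * state[i] * state[j]
                      for i in range(n) for j in range(n))

def settle(state, weights, steps=200):
    """Asynchronous updates: each chosen unit aligns with its local field.
    With symmetric weights this never increases the energy, so the
    network slides downhill into a local minimum."""
    n = len(state)
    for _ in range(steps):
        i = random.randrange(n)
        field = sum(weights[i][j] * state[j] for j in range(n) if j != i)
        state[i] = 1 if field >= 0 else -1
    return state

random.seed(0)
# Four units with mixed (frustrated) couplings, like a tiny spin glass.
w = [[0,  1, -1,  1],
     [1,  0,  1, -1],
     [-1, 1,  0,  1],
     [1, -1,  1,  0]]
s = [random.choice([-1, 1]) for _ in range(4)]
print("energy before:", energy(s, w))
settle(s, w)
print("energy after: ", energy(s, w))
```

The frustrated couplings mean there are several local minima, which is the analogy: a high-IC value system is one where the network has to settle somewhere, and where it settles depends on where it starts.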