Nominull comments on Morality is not about willpower - Less Wrong

Post author: PhilGoetz 08 October 2011 01:33AM


Comment author: Nominull 07 October 2011 03:49:38PM 3 points

Humans don't make decisions based primarily on utility functions. To the extent that the Wise Master presented that as a descriptive fact rather than a prescriptive exhortation, he was simply wrong on the facts. You can model behavior with a set of values and a utility function, but that model will either fail to fully capture human behavior, or be so overfit that it ceases to be descriptive at all (e.g., "I have utility infinity for doing the stuff I do and utility zero for everything else" technically predicts your actions but is practically useless).
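Nominull's degenerate case can be made concrete. The sketch below (all names and data invented for illustration) implements the "utility infinity for what I actually did" model: it reproduces every observed choice perfectly, yet for any situation outside the observed data every option scores zero, so the model makes no real prediction at all.

```python
# Hypothetical sketch of the overfit "utility function" from the comment:
# maximal utility for whatever the agent was observed doing, zero otherwise.
observed_choices = {"breakfast": "toast", "commute": "bike"}

def overfit_utility(situation, action):
    # "Utility infinity" for the action actually taken, zero for all others.
    return float("inf") if observed_choices.get(situation) == action else 0.0

def predict(situation, options):
    # Choose the option with the highest utility under the model.
    return max(options, key=lambda a: overfit_utility(situation, a))

print(predict("breakfast", ["toast", "cereal"]))  # "toast": matches the data
# For an unobserved situation, every option ties at utility 0, so the
# "prediction" is an arbitrary tiebreak -- the model is descriptively empty.
print(predict("lunch", ["soup", "salad"]))
```

The point is not that utility models are useless, but that a model with one free parameter per observed action has no descriptive content beyond the data it memorized.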

You say that if humans don't implement utility functions, there's no point to reading Less Wrong. I disagree, but in any case, that doesn't seem like an argument that humans implement utility functions. It reads more like an appeal to emotion: we are Less Wrongers with some fraction of our identity tied to this site, so you want us to reject the proposition because of the emotional cost of its conclusions. Logically, though, it makes little sense to take the meaningfulness of Less Wrong as given and reason from that to conclusions about human cognition. That's begging the question.

Comment author: PhilGoetz 07 October 2011 11:34:12PM * -2 points

Nobody said that humans implement utility functions. Since I already said this, all I can do is say it again: Values, and utility functions, are both models we construct to explain why we do what we do. Whether or not any mechanism inside your brain does computations homomorphic to utility computations is irrelevant. [New edit uses different wording.]

Saying that humans don't implement utility functions is like saying that the ocean doesn't simulate fluid flow, or that a satellite doesn't compute a trajectory.

Comment author: Nominull 08 October 2011 12:18:31AM 2 points

It's more like saying a pane of glass doesn't simulate fluid flow, or an electron doesn't compute a trajectory.

Comment author: wedrifid 08 October 2011 03:21:31AM 1 point

> It's more like saying a pane of glass doesn't simulate fluid flow

Which would be way off!

Comment author: rabidchicken 08 October 2011 04:28:52AM 0 points

Does it flow, or simulate a flow?

Comment author: wedrifid 08 October 2011 04:39:13AM 1 point

Neither.

Comment author: PhilGoetz 08 October 2011 02:07:04AM 0 points

So how would you define rationality? What are you trying to do, when you're trying to behave rationally?

Comment author: Jack 08 October 2011 08:28:06AM 1 point

> Values, and utility functions, are both models we construct to explain why we do what we do.

Indeed, and a model which treats fuzzies and utils as exchangeable is a poor one.

Comment author: PhilGoetz 08 October 2011 03:07:21PM -1 points

You could equally well analyze the utils and the fuzzies, find subcategories of those, and say they are not exchangeable.

The task of modeling a utility function is the task of finding how these different things are exchangeable. We know they are exchangeable, because people have preferences between situations: they eventually do one thing or the other.
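The revealed-preference idea above can be sketched in code. This is a hypothetical toy (the linear model, the "rate" parameter, and all choice data are my own invention, not anything from the thread): assuming a linear utility u = utils + rate * fuzzies, each observed binary choice constrains the exchange rate, and enough choices pin it to an interval.

```python
# Hypothetical sketch: inferring an exchange rate between two goods
# from observed binary choices, assuming linear utility
#   u(bundle) = utils + rate * fuzzies.
# Each record: (chosen_bundle, rejected_bundle); bundles are (utils, fuzzies).
choices = [
    ((3, 0), (0, 1)),  # chose 3 utils over 1 fuzzy -> implies rate <= 3
    ((0, 1), (1, 0)),  # chose 1 fuzzy over 1 util  -> implies rate >= 1
]

def consistent(rate):
    # A candidate rate is consistent if every chosen bundle scores
    # at least as high as the bundle it was chosen over.
    def u(bundle):
        return bundle[0] + rate * bundle[1]
    return all(u(chosen) >= u(rejected) for chosen, rejected in choices)

# Scan candidate rates; the consistent ones form the inferred interval.
candidates = [r / 10 for r in range(0, 51)]
interval = [r for r in candidates if consistent(r)]
print(interval[0], interval[-1])  # these two choices pin the rate to [1.0, 3.0]
```

Actual human choices are noisier and less transitive than this, which is Jack's objection; but the sketch shows what "finding how these things are exchangeable" from observed preferences amounts to in the simplest case.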