handoflixue comments on What is Metaethics? - Less Wrong

Post author: lukeprog 25 April 2011 04:53PM

Comment author: handoflixue 06 May 2011 08:02:44PM, 2 points

I recently got a raise. This freed up my finances to start SCUBA diving. SCUBA diving benefits heavily from my being in shape.

I now have a strong preference for losing weight, and a reinforced preference for exercise, because the gains from both went up significantly. This also left me with a much lower preference for certain types of food, as they run contrary to these new preferences.

I'd think that's a pretty concrete example of changing my preferences, unless we're using different definitions of "preference."

Comment author: TimFreeman 06 May 2011 08:40:23PM, 1 point

I suppose we are using different definitions of "preference". I'm using it as a friendly term for a person's utility function, if they seem to be optimizing for something, or we say they have no preference if their behavior can't be understood that way. For example, what you're calling food preferences are what I'd call a strategy or a plan, rather than a preference, since the end is to support the SCUBA diving. If the consequences of eating different types of food magically changed, your diet would probably change so it still supported the SCUBA diving.
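
To make the distinction concrete, here's a toy sketch (the states, foods, and numbers are all invented for illustration): the utility function is defined over end states, and the diet falls out as whatever plan currently serves it.

```python
# Toy sketch: a preference is a utility function over end states;
# the diet is just a strategy computed from it. All names and
# numbers are invented for illustration.

def utility(state):
    """The preference: cares about outcomes, not plans."""
    return 10.0 if state["fit_for_scuba"] else 0.0

def consequences(diet):
    """The current world model: how plans map to outcomes."""
    return {"fit_for_scuba": diet == "lean"}

# The diet is instrumental: if consequences() magically changed,
# best_diet would change while utility() stayed exactly the same.
best_diet = max(["lean", "sweets"], key=lambda d: utility(consequences(d)))
print(best_diet)  # -> lean
```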

Comment author: handoflixue 06 May 2011 09:49:03PM, 2 points

Ahh, I re-read the thread with this understanding, and was struck by this:

> I like using the word "preference" to include all the things that drive a person, so I'd prefer to say that your preference has two parts

It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.

Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.
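
Here's a toy version of what I mean, with made-up drives and weights; each drive scores actions on its own, and a weight says how strongly it pulls:

```python
# Toy model of the "multiple utility functions" picture; the drives,
# actions, and weights are all invented for illustration.

drives = {
    "rationality": lambda a: 1.0 if a == "work" else 0.0,
    "akrasia":     lambda a: 1.0 if a == "procrastinate" else 0.0,
}

def choose(weights, actions=("work", "procrastinate")):
    """Pick the action with the highest weighted total across drives."""
    return max(actions,
               key=lambda a: sum(w * drives[n](a) for n, w in weights.items()))

print(choose({"rationality": 1.0, "akrasia": 2.0}))  # -> procrastinate
print(choose({"rationality": 1.0, "akrasia": 0.5}))  # -> work

# Shrinking akrasia's weight is exactly how one utility function
# benefits from damaging another: rationality's payoff goes up.
```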

Presumably anyone who wants a metaethical theory has a preference that would be maximized by discovering and obeying that theory. This would still be weighted against their other existing preferences, just as my preference for rationality has yet to eliminate akrasia or procrastination from my life :)

Does that make sense as a "motivation for wanting to change your preferences"?

Comment author: TimFreeman 06 May 2011 10:35:50PM, 2 points

I agree that akrasia is a bad thing that we should get rid of. I like to think of it as a failure to have purposeful action, rather than a preference.

My dancing around here has a purpose. You see, I have this FAI specification that purports to infer everyone's preference and to adopt as its utility function a weighted average of what everyone prefers. If it infers that my akrasia is part of my preferences, I'm screwed, so we need a distinction there. Check http://www.fungible.com. It has a lot of bugs that are not described there, so don't go implementing it. Please.
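
To be clear about the aggregation idea only (this is not the actual specification on that site, just the rough shape of it, with invented people and weights):

```python
# Rough shape of the aggregation step only -- NOT the specification
# at fungible.com. People, weights, and outcomes are invented.

def fai_utility(outcome, inferred_utilities, weights):
    """Weighted average of everyone's inferred utility for an outcome.

    inferred_utilities: {person: utility_function}
    weights:            {person: nonnegative float}
    """
    total = sum(weights.values())
    return sum(weights[p] * u(outcome)
               for p, u in inferred_utilities.items()) / total

alice = lambda o: 1.0 if o == "peace" else 0.0
bob   = lambda o: 1.0 if o == "pie" else 0.0
print(fai_utility("peace", {"alice": alice, "bob": bob},
                  {"alice": 1.0, "bob": 1.0}))  # -> 0.5

# The failure mode: if the inference step mistakes my akrasia for part
# of my utility function, the FAI optimizes for my procrastination too.
```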

In general, if the FAI is going to give "your preference" to you, your preference had better be something stable about you that you'll still want when you get it.

If there's no fix for akrasia, then it's hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I'm spewing BS about stuff that sounds nice to do, but I really don't want to do it. I certainly would want an akrasia fix if it were available. Maybe that's the important preference.

Comment author: TheOtherDave 06 May 2011 11:02:02PM, 2 points

> If there's no fix for akrasia, then it's hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I'm spewing BS about stuff that sounds nice to do, but I really don't want to do it.

Very much agreed.

Comment author: TimFreeman 06 May 2011 11:23:19PM, 0 points

> It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.
>
> Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.

At the end of the day, you're going to prefer one action over another. It might make sense to model someone as having multiple utility functions, but you also have to say how they all get added up (or combined in some other way), so you can figure out which immediate outcome has the best expected long-term utility and predict that the person is going to take an action that gets them there.
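
As a toy illustration (the actions, probabilities, and sub-utilities are all invented), the combination rule plus the prediction step might look like this:

```python
# However many utility functions you posit, prediction needs a single
# combined score applied to each action's expected long-term outcome.
# Everything below is invented for illustration.

outcomes = {  # action -> [(probability, long_term_outcome), ...]
    "diet":  [(0.7, "fit"), (0.3, "unfit")],
    "feast": [(0.1, "fit"), (0.9, "unfit")],
}

sub_utilities = [
    lambda o: 5.0 if o == "fit" else 0.0,  # health drive
    lambda o: 1.0,                         # comfort drive (indifferent here)
]

def combined(o):
    # The combination rule -- here a plain sum; any fixed rule would do.
    return sum(u(o) for u in sub_utilities)

def expected_utility(action):
    return sum(p * combined(o) for p, o in outcomes[action])

print(max(outcomes, key=expected_utility))  # -> diet
```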

Comment author: handoflixue 07 May 2011 12:26:01AM, 1 point

I don't think many people actually act in a way that suggests consistent optimization around a single factor; they optimize for multiple conflicting factors. I'd agree that you can evaluate the eventual compromise point, and I suppose you could say they optimize for that complex compromise. For me, it happens to be easier to model this as conflicting desires with a conflict-resolution function layered on top. But I think we both agree on the actual result: people aren't optimizing for a single clear goal like "happiness" or "lifetime income".

> predict the person

Prediction seems to run into the issue that utility evaluations change over time. I used to place a high utility value on sweets; now I do not. I used to live in a location where going out to an event had a much higher cost, and thus was less often the ideal action. And so on.

It strikes me as being rather like weather: You can predict general patterns, and even manage a decent 5-day forecast, but you're going to have a lot of trouble making specific long-term predictions.