AndrewKemendo comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong

Post author: MichaelGR 11 November 2009 03:00AM


Comment author: AndrewKemendo 11 November 2009 12:34:23PM 1 point [-]

Since you and most around here seem to be utilitarian consequentialists, how much thought have you put into developing your personal epistemological philosophy?

Worded differently, how have you come to the conclusion that "maximizing utility" is the goal to optimize, as opposed to, say, virtue seeking?

Comment author: Eliezer_Yudkowsky 11 November 2009 06:55:52PM 4 points [-]

how much thought have you put into developing your personal epistemological philosophy?

...very little, you know me, I usually just wing that epistemology stuff...

(seriously, could you expand on what this question means?)

Comment author: AndrewKemendo 12 November 2009 02:06:19AM 0 points [-]

Ha, fair enough.

I often see references to maximizing utility and individual utility functions in your writing, and it would seem to me (unless I am misinterpreting your usage) that you are implying that hedonic (felicific) calculation is the optimal way to determine what is correct when applying counterfactual outcomes to decision making.

I am asking how you determined (if that is the case) that the best way to judge the optimality of decision making was through utilitarianism as opposed to, say, ethical egoism or virtue ethics (not to equivocate). Or perhaps your reference is purely abstract and does not invoke the felicific calculation.

Comment author: Nick_Tarleton 12 November 2009 02:08:36AM 1 point [-]

hedonic (felicific) calculation

See Not For The Sake of Happiness (Alone).

I am asking how you determined (if that is the case) that the best way to judge the optimality of decision making was through utilitarianism as opposed to, say, ethical egoism or virtue ethics (not to equivocate).

See The "Intuitions" Behind "Utilitarianism" for a partial answer.

Comment author: AndrewKemendo 12 November 2009 06:29:01AM 0 points [-]

Yes, I remember reading both and scratching my head, because both seemed to beat around the bush rather than address the issues explicitly. Both lean too much on addressing the subjective aspect of non-utility-based calculations, which in my mind is a red herring.

Admittedly I should have referenced them, and perhaps the issue has been addressed as well as it will be. I would rather see this become a discussion, as in my mind it is more important than any of the topics dealt with daily here; however, that may not be appropriate for this particular thread.

Comment author: CronoDAS 12 November 2009 07:14:15AM 2 points [-]

"Preference satisfaction utilitarianism" is a lot closer to Eliezer's ethics than hedonic utilitarianism. In other words, there are more important things to maximize than happiness.

Comment author: Psy-Kosh 11 November 2009 01:47:31PM 2 points [-]

*blinks* I'm curious as to what it is you are asking. A utility function is just a way of encoding and organizing one's preferences/values. Okay, there are a couple of additional requirements, like internal consistency (if you prefer A to B and B to C, you'd better prefer A to C) and such, but other than that, it's just a convenient way of talking about one's preferences.

The goal isn't "maximize utility", but rather "maximizing utility" is a way of stating what it is you're doing when you're working to achieve your goals. Or did I completely misunderstand?
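(To make the "just an encoding" point concrete, here's a toy sketch in Python; the outcomes A, B, C and the numbers attached to them are entirely made up:)

```python
# A utility function is just a numeric encoding of a preference ordering.
# Any assignment of numbers that agrees with the ranking will do equally well.
utility = {"A": 3.0, "B": 2.0, "C": 1.0}  # hypothetical outcomes

def prefers(x, y):
    """x is preferred to y iff it is assigned higher utility."""
    return utility[x] > utility[y]

# Internal consistency (transitivity) comes for free from the ordering
# of the real numbers:
assert prefers("A", "B") and prefers("B", "C") and prefers("A", "C")
```

The point being: the numbers carry no moral content of their own; they just restate whatever preference ordering you already had.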

Comment author: Johnicholas 11 November 2009 05:44:01PM 1 point [-]

I think there has to be more to utility function talk than "convenience" - for one thing, it's not more convenient than preference talk, in general. Consider an economic utility function, valuing bundles of apples and oranges. If someone's preferences are summarizable by U(apples, oranges)=sqrt(apples*oranges), that might be convenient, but there's no free lunch. No compression can be achieved without assumptions about the prior distribution. Believing that preferences tend to have terse expressions in functional talk is a claim about the actual distribution of preferences in the world. The belief that maximizing utility is a perspicuous way of expressing "behave correctly" is something that one has to have evidence for.
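(For concreteness, that example function can be written out directly; a toy sketch, where the specific bundles compared are my own invention:)

```python
import math

# A utility function over bundles of apples and oranges:
# U(apples, oranges) = sqrt(apples * oranges)
def U(apples, oranges):
    return math.sqrt(apples * oranges)

# One short formula summarizes infinitely many pairwise preferences.
# E.g. this agent is indifferent between (1, 4) and (4, 1)...
assert U(1, 4) == U(4, 1) == 2.0
# ...and prefers a balanced bundle to a lopsided one with the same total:
assert U(3, 3) > U(5, 1)
```

That compression is exactly the "no free lunch" point: the formula is terse only because these particular preferences happen to have a terse functional form.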

My (very partial) understanding of virtue morality is that virtue ethicists believe that "behave correctly" is well expressed in terms of virtues.

Comment author: Psy-Kosh 11 November 2009 06:32:24PM 1 point [-]

I didn't mean convenient in the sense of compressibility, but convenient in the sense of representing our preference ordering in a form that lets one then talk about stuff like "how can I get the world into the best possible state, where 'best' is in terms of my values?" in terms of maximizing utility, and when combined with uncertainty, maximizing expected utility.

I just meant "utility doesn't automatically imply a specific set of values/virtues. It's more a way of organizing your virtues so that you can at least formally define optimal actions, giving you a starting point to look for ways to approximately compute such things, etc."

Or did I misunderstand your point completely?

Comment author: Johnicholas 11 November 2009 08:39:16PM 3 points [-]

The phrase "how can I get the world into the best possible state" is explicitly consequentialist. Non-consequentialists (e.g. "The end does not justify the means") do not admit that correct behavior is getting the world into the best possible state.

Non-utilitarians probably perceive suggestions of maximizing utility, maximizing expected utility, and (in particular) approximating those two as very dangerous and likely to lead to incorrect behavior.

The original poster implied that there is a difference between seeking to maximize utility and (for example) virtue seeking. I'm trying to explain in what sense the original poster had a real point. Not everyone is a utilitarian, and saying "in principle, I could construct a utility function from your preferences" doesn't make everyone a utilitarian.

Comment author: Psy-Kosh 11 November 2009 08:44:01PM 0 points [-]

Really, non-consequentialism can be rephrased as a consequentialist philosophy by simply including the means, i.e., the history, as part of the "state": assigning lower value to reaching a given state by bad methods than by good methods.

Or am I still not getting it?
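(A toy sketch of that move in Python; the states, the means, and all the numbers are invented purely for illustration:)

```python
# Make the "outcome" include how you got there: states are
# (result, means) pairs, and the utility function can then penalize
# bad methods directly.
def utility(result, means):
    base = {"cured": 10.0, "not_cured": 0.0}[result]
    penalty = {"honest": 0.0, "coercion": 20.0}[means]
    return base - penalty

# Reaching the same end state by bad means scores worse -- here, even
# worse than not reaching it at all:
assert utility("cured", "honest") > utility("not_cured", "honest") > utility("cured", "coercion")
```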

Comment author: Johnicholas 11 November 2009 09:00:39PM 1 point [-]

Yes, it's possible to encode the nonconsequentialism or "nonutilitarianism" into the utility function. However, by doing so you're making the utility function inconvenient to work with. You can't simultaneously claim that the utility function is "simply" an encoding of people's preferences and ALSO that the utility function is convenient or preferable.

Then you go and approximate the (uglified) utility function! Put yourself in the virtue theorist's or Kantian's shoes. It certainly sounds to me like you're planning to discard their concerns regarding moral/ethical/correct behavior.

(Note: I don't actually understand virtue ethics at all, so I might be getting this entirely wrong.) Imagine the virtue ethicist saying "Your concerns can be encoded into the virtue of 'achieves a desirable goal', and will be included in our system along with the other virtues." Would you want to know WHY the system is being built with virtues at the bottom and consequentialism as an encoding? Would your questions make sense?

Comment author: Psy-Kosh 11 November 2009 09:18:10PM 0 points [-]

It's "convenient" in the sense of giving us a general way of talking about how to make decisions. It's "convenient" in that it is set up in such a way to encode not just what you prefer more than other stuff, but how much more, etc...

It also lets us take advantage of whatever decision-theory theorems have been proven, and so on...

As far as "virtue of achieving a desirable goal", "desirable", "virtue", and "achieving" would be doing all the heavy lifting there. :)

But really, my point was simply that the original comment was stated in such a way as to imply that "maximizing utility" was itself a moral philosophy, i.e., the sort of thing about which you could say "I consider that immoral, and instead care about personal virtue". I was simply saying "huh? Utility stuff is just a way of talking about whatever values you happen to have. It's not, on its own, a specific set of values. It's like, I guess, saying 'what if I don't believe in math and instead believe in electromagnetism?'"

Comment author: AndrewKemendo 12 November 2009 02:31:26AM *  -1 points [-]

You'll have to forgive me, because I am an economist by training, and mentions of utility have very specific references to Jeremy Bentham.

Your definition of the term "maximizing utility" and Bentham's definition (he was the originator) are significantly different; if you don't know his, I will describe it (if you do, sorry for the redundancy).

Jeremy Bentham devised the felicific calculus, a hedonistic philosophy whose defining purpose is to maximize happiness. He was of the opinion that it was possible, in theory, to create a literal formula giving optimized preferences such that happiness is maximized for the individual. This is the foundation of all utilitarian ethics, as each variant seeks essentially to itemize all preferences.

Virtue ethics, for those who do not know, is the Aristotelian philosophy positing that each sufficiently differentiated organism or object is naturally optimized for at least one specific purpose above all others. Optimized decision making for a virtue theorist means doing the things which best express or develop that specific purpose, similar to how specialty tools are best used for their specialty. Happiness is said to spring from this as a consequence, not as its goal.

I just want to know, if it is the case that he came to follow the former (Bentham's) philosophy, how he came to that decision (theoretically it is possible to combine the two).

So in this case, while the term may give an approximation of the optimal decision, used in that manner it is not explicitly clear about how the basis for the decision is determined in the first place; that is, unless, as some have done, it is specified that maximizing happiness is the goal (which I had just assumed people were asserting implicitly anyhow).

Comment author: Psy-Kosh 12 November 2009 04:03:59AM 0 points [-]

Okay, I was talking about utility maximization in the decision-theory sense, i.e., computations of expected utility, etc.
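(For concreteness, a toy sketch of that decision-theory sense in Python; the actions, probabilities, and utilities below are all made up:)

```python
# The decision-theory sense of "maximize utility": under uncertainty,
# pick the action whose probability-weighted utility is highest.
actions = {
    "safe_bet":  [(1.0, 5.0)],                  # list of (probability, utility)
    "long_shot": [(0.1, 100.0), (0.9, -2.0)],
}

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

# safe_bet scores 5.0; long_shot scores 0.1*100 + 0.9*(-2) = 8.2
best = max(actions, key=lambda a: expected_utility(actions[a]))  # "long_shot"
```

Note there is no moral content in the machinery itself; the utilities plugged in are whatever your values say they are.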

As far as happiness being The One True Virtue goes, well, that has been explicitly addressed.

Anyways, "maximize happiness above all else" is explicitly not it. And "utility", as discussed on this site, refers to the decision-theoretic concept; it is not a specific moral theory at all.

Now, the stuff that we consider morality would include happiness as a term, but certainly not as the only thing.

Virtue ethics, as you describe it, gives me an "eeew" reaction, to be honest. It's the right thing to do simply because it's what you were optimized for?

If I somehow bioengineer some sort of sentient living weapon thing, is it actually the proper moral thing for that being to go around committing mass slaughter? After all, that's what it's "optimized for"...

Comment author: AndrewKemendo 12 November 2009 02:31:40AM -1 points [-]

Thanks, I followed up below.