whowhowho comments on A Sketch of an Anti-Realist Metaethics - Less Wrong

Post author: Jack 22 August 2011 05:32AM 16 points


Comments (136)


Comment author: whowhowho 14 February 2013 04:22:59PM -2 points [-]

That presupposes something like utilitarianism. If something like deontology is true, then number-crunched solutions could involve unjustifiable violations of rights.

Comment author: DaFranker 14 February 2013 04:36:57PM *  2 points [-]

Could you humor me for an example? What would the universe look like if "deontology is true" versus a universe where "deontology is false"? Where is the distinction?

I don't see how a deontological system would prevent number-crunching. You just number-crunch for a different target: find the pareto optima that minimize the amount of rule-breaking and/or the importance of the rules broken.
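Concretely, the sort of number-crunching I have in mind might look like this sketch (the rules, importance weights, and actions are all invented for illustration, not taken from any actual deontological theory):

```python
# Score each candidate action by (number of rules broken, total importance
# of the rules broken), then keep the Pareto-optimal actions: those no
# alternative beats on both counts.

def deontic_score(action, rules):
    broken = [importance for violates, importance in rules if violates(action)]
    return (len(broken), sum(broken))

def pareto_optimal(actions, rules):
    scores = {a: deontic_score(a, rules) for a in actions}
    def dominated(a):
        ca, ia = scores[a]
        return any(cb <= ca and ib <= ia and (cb, ib) != (ca, ia)
                   for b, (cb, ib) in scores.items() if b != a)
    return [a for a in actions if not dominated(a)]

rules = [(lambda a: "lie" in a, 1),    # minor rule
         (lambda a: "harm" in a, 10)]  # grave rule
actions = ["lie", "harm", "lie and harm", "do nothing"]
print(pareto_optimal(actions, rules))  # only "do nothing" breaks no rules
```

Whether "minimize rule-breaking" is even a permissible deontological move is of course part of the dispute; the point is only that a target function can be written down and crunched.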

Comment author: whowhowho 15 February 2013 12:00:00AM -1 points [-]

Could you humor me for an example? What would the universe look like if "deontology is true" versus a universe where "deontology is false"?

What would it be like if utilitarianism is true? Or the axiom of choice? Or the continuum hypothesis?

I don't see how a deontological system would prevent number-crunching.

I don't see how a description of the neurology of moral reasoning tells you how to crunch the numbers -- which decision theory you need to use to implement which moral theory to resolve conflicts in the right way.

Comment author: DaFranker 15 February 2013 01:04:45AM *  2 points [-]

What would it be like if utilitarianism is true?

This statement seems meaningless to me. As in "Utilitarianism is true" computes in my mind the exact same way as "Politics is true" or "Eggs are true".

The term "utilitarianism" encompasses a broad range of philosophies, but seems more commonly used on LessWrong as meaning roughly some sort of mathematical model for computing the relative values of different situations, based on certain value assumptions about the elements of those situations and a thingy called a "utility function".

If this latter meaning is used, "utilitarianism is true" is a complete type error, just like "Blue is true" or "Eggs are loud". You can't say that the mathematical formulas and formalisms of utilitarianism are "true" or "false", they're just formulas. You can't say that "x = 5" is "true" or "false". It's just a formula that doesn't connect to anything, and that "x" isn't related to anything physical - I just pinpointed "x" as a variable, "5" as a number, and then declared them equivalent for the purposes of the rest of this comment.

This is also why I requested an example for deontology. To me, "deontology is true" sounds just like those examples. Neither "utilitarianism is true" nor "deontology is true" corresponds to a well-formed statement or sentence or proposition or whatever the "correct" philosophical term is for this.

Comment author: nshepperd 15 February 2013 11:46:10AM 1 point [-]

but seems more commonly used on LessWrong as meaning roughly some sort of mathematical model for computing the relative values of different situations, based on certain value assumptions about the elements of those situations and a thingy called a "utility function".

Wait, seriously? That sounds like a gross misuse of terminology, since "utilitarianism" is an established term in philosophy that specifically talks about maximising some external aggregative value such as "total happiness", or "total pleasure minus suffering". Utility functions are a lot more general than that (i.e. need not be utilitarian, and can be selfish, for example).

Comment author: DaFranker 15 February 2013 03:16:31PM 2 points [-]

Wait, seriously? That sounds like a gross misuse of terminology, since "utilitarianism" is an established term in philosophy that specifically talks about maximising some external aggregative value such as "total happiness", or "total pleasure minus suffering".

To an untrained reader, this would seem as if you'd just repeated in different words what I said ;)

I don't see "utilitarianism" itself used all that often, to be honest. I've seen the phrase "in utilitarian fashion", usually referring more to my description than the traditional meaning you've described.

"Utility function", on the other hand, gets thrown around a lot with a very general meaning that seems to be "If there's something you'd prefer than maximizing your utility function, then that wasn't your real utility function".

I think one important source of confusion is that LWers routinely use concepts that were popularized or even invented by primary utilitarians (or so I'm guessing, since these concepts come up on the wikipedia page for utilitarianism), and then some reader assumes they're using utilitarianism as a whole in their thinking, and the discussion drifts from "utility" and "utility function" to "in utilitarian fashion" and "utility is generally applicable" to "utilitarianism is true" and "(global, single-variable-per-population) utility is the only thing of moral value in the universe!".

Comment author: whowhowho 15 February 2013 09:49:27AM *  -1 points [-]

Everywhere outside of LW, utilitarianism means a moral theory. It, or some specific variation of it, is therefore capable of being true or false. The point could as well have been made with some less mathematical moral theory. The truth or falsehood of moral theories doesn't have direct empirical consequences, any more than the truth or falsehood of abstract mathematical claims. Shut-up-and-calculate doesn't work here, because one is not using utilitarianism or any other moral theory for predicting what will happen; one is using it to plan what one will do.

You can't say that the mathematical formulas and formalisms of utilitarianism are "true" or "false", they're just formulas. You can't say that "x = 5" is "true" or "false". It's just a formula that doesn't connect to anything, and that "x" isn't related to anything physical - I just pinpointed "x" as a variable, "5" as a number, and then declared them equivalent for the purposes of the rest of this comment.

And I can't say that f, m and a mean something in f=ma? When you apply maths, the variables mean something. That's what application is. In U-ism, the input is happiness, or life-years, or something, and the output is a decision that is put into practice.

This is also why I requested an example for deontology. To me, "deontology is true" sounds just like those examples. Neither "utilitarianism is true" nor "deontology is true" corresponds to a well-formed statement or sentence or proposition or whatever the "correct" philosophical term is for this.

I don't know why you would want to say you have an explanation of morality when you are an error theorist.

I also don't know why you are an error theorist. U-ism and D-ology are rival answers to the question "what is the right way to resolve conflicts of interest?". I don't think that is a meaningless or unanswerable question. I don't see why anyone would want to pluck a formula out of the air, number-crunch using it, and then make it policy. Would you walk into a suicide booth because someone had calculated, without justifying the formula used, that you were a burden to society?

Comment author: DaFranker 15 February 2013 04:36:03PM 1 point [-]

I think you are making a lot of assumptions about what I think and believe. I also think you're coming dangerously close to being perceived as a troll, at least by me.

U-ism and D-ology are rival answers to the question "what is the right way to resolve conflicts of interest?"

Oh! So that's what they're supposed to be? Good, then clearly neither - rejoice, people of the Earth, the answer has been found! Mathematically you literally cannot do better than Pareto-optimal choices.

The real question, of course, is how to put meaningful numbers into the game theory formula, how to calculate the utility of the agents, how to determine the correct utility functions for each agent.

My answer to this is that there is already a set of utility functions implemented in each human's brain, and this set of utility functions can itself be considered a separate sub-game. If you find solutions to all the problems in this subgame, you'll end up with a reflectively coherent CEV-like ("ideal" from now on) utility function for this one human, and then that's the utility function you use for that agent in the big game board / decision tree / payoff matrix / what-have-you of moral dilemmas and conflicts of interest.
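To make that last step concrete, here is a toy sketch that assumes the per-agent ideal utility functions have somehow already been extracted (every name and number below is invented):

```python
# Given each agent's utility function over candidate outcomes, keep the
# Pareto-optimal outcomes: those that no other outcome beats for every
# agent at once.

def pareto_outcomes(outcomes, utility_fns):
    def dominates(x, y):
        ux = [u(x) for u in utility_fns]
        uy = [u(y) for u in utility_fns]
        return all(a >= b for a, b in zip(ux, uy)) and ux != uy
    return [o for o in outcomes
            if not any(dominates(p, o) for p in outcomes if p != o)]

# Toy two-agent conflict: outcome = (paperclips made, lives saved)
outcomes = [(3, 0), (0, 3), (2, 2), (1, 1)]
agents = [lambda o: o[0], lambda o: o[1]]
print(pareto_outcomes(outcomes, agents))  # (1, 1) is dominated by (2, 2)
```

Which of the surviving Pareto-optimal outcomes to actually pick is exactly the part this sketch doesn't settle.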

So now what we need is better insights and more research into what these sets of utility functions look like, how close to completion they are, and how similar they are across different humans.

Note that I've never even heard of a single human capable of knowing or always acting on their "ideal utility function". All sample humans I've ever seen also have other mechanisms interfering or taking over which makes it so that they don't always act even according to their current utility set, let alone their ideal one.

I don't know why you would want to say you have an explanation of morality when you are an error theorist. (...) I also don't know why you are an error theorist.

I don't know what being an "error theorist" entails, but you claiming that I am one seems valid evidence that I am one so far, so sure. Whatever labels float your boat, as long as you aren't trying to sneak in connotations about me or committing the noncentral fallacy. (notice that I accidentally snuck in the connotation that, if you are committing this fallacy, you may be using "worst argument in the world")

And I can't say that f, ma and a mean something in f=ma? When you apply maths, the variables mean something. That's what application is. In U-ism, the input it happiness, or lifeyears, or soemthig, and the output is a decision that is put into practice.

Sure. Now to re-state my earlier question: which formulations of U and D can have truth values, and what pieces of evidence would falsify each?

The formulation for f=ma is that the force applied to an object is equal to the product of the object's mass and its acceleration, for certain appropriate units of measurements. You can experimentally verify this by pushing objects, literally. If for some reason we ran a well-designed, controlled experiment and suddenly more massive objects started accelerating more than less massive objects with the same amount of force, or more generally the physical behavior didn't correspond to that equation, the equation would be false.

Comment author: whowhowho 15 February 2013 04:56:40PM *  0 points [-]

Oh! So that's what they're supposed to be? Good, then clearly neither - rejoice, people of the Earth, the answer has been found! Mathematically you literally cannot do better than Pareto-optimal choices

Assuming that everything of interest can be quantified, that the quantities can be aggregated and compared, and assuming that anyone can take any amount of loss for the greater good... i.e. assuming all the stuff that utilitarians assume and that their opponents don't.

My answer to this is that there is already a set of utility functions implemented in each human's brain, and this set of utility functions can itself be considered a separate sub-game. If you find solutions to all the problems in this subgame, you'll end up with a reflectively coherent CEV-like ("ideal" from now on) utility function for this one human, and then that's the utility function you use for that agent in the big game board / decision tree / payoff matrix / what-have-you of moral dilemmas and conflicts of interest.

No. You can't leap from "a reflectively coherent CEV-like [..] utility function for this one human" to a solution of conflicts of interest between agents. All you have is a set of exquisite models of individual interests, and no way of combining them, or trading them off.

I don't know what being an "error theorist" entails,

Strictly speaking, you are a metaethical error theorist. You think there is no meaning to the truth or falsehood of metaethical claims.

Now to re-state my earlier question: which formulations of U and D can have truth values, and what pieces of evidence would falsify each?

Any two theories which have differing logical structure can have truth values, since they can be judged by coherence, etc., and any two theories which make different object-level predictions can likewise have truth values. U and D pass both criteria with flying colours.

And if CEV is not a meaningful metaethical theory, why bother with it? If you can't say that the output of a grand CEV number-crunch is what someone should actually do, what is the point?

The formulation for f=ma is that the force applied to an object is equal to the product of the object's mass and its acceleration, for certain appropriate units of measurements. You can experimentally verify this by pushing objects, literally.

I know. And you determine the truth of other theories (e.g. maths) non-empirically. Or you can use a mixture. How were you proposing to test CEV?

Comment author: DaFranker 15 February 2013 07:24:36PM *  1 point [-]

Assuming that everything of interest can be quantified, that the quantities can be aggregated and compared, and assuming that anyone can take any amount of loss for the greater good... i.e. assuming all the stuff that utilitarians assume and that their opponents don't.

(...)

No. You can't leap from "a reflectively coherent CEV-like [..] utility function for this one human" to a solution of conflicts of interest between agents. All you have is a set of exquisite models of individual interests, and no way of combining them, or trading them off.

That is simply false.

Two individual interests: Making paperclips and saving human lives. Prisoners' dilemma between the two. Is there any sort of theory of morality that will "solve" the problem or do better than number-crunching for Pareto optimality?

Even things that cannot be quantified can be quantified. I can quantify non-quantifiable things with "1" and "0". Then I can count them. Then I can compare them: I'd rather have Unquantifiable-A than Unquantifiable-B, unless there's also Unquantifiable-C, so B < A < B+C. I can add any number of unquantifiables and/or unbreakable rules, and devise a numerical system that encodes all my comparative preferences in which higher numbers are better. Then I can use this to find numbers to put on my Prisoners Dilemma matrix or any other game-theoretic system and situation.
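For instance, a minimal version of that encoding, where the weights are arbitrary numbers picked only so that the sums reproduce the stated preferences:

```python
# Assign each "unquantifiable" a numeric weight so that summed values
# reproduce the comparative preferences B < A < B + C; the numbers encode
# the preference ordering and nothing more.

weights = {"A": 2, "B": 1, "C": 2}

def value(bundle):
    return sum(weights[x] for x in bundle)

print(value(["B"]) < value(["A"]) < value(["B", "C"]))  # True: B < A < B+C
```

Any assignment that preserves the ordering would do equally well; the point is only that comparative preferences over "unquantifiables" can always be given a numerical carrier.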

Relevant claim from an earlier comment of mine, reworded: There does not exist any "objective", human-independent method of comparing and trading the values within human morality functions.

Game Theory is the science of figuring out what to do in case you have different agents with incompatible utility functions. It provides solutions and formalisms both when comparisons between agents' payoffs are impossible and when they are possible. Isn't this exactly what you're looking for? All that's left is applied stuff - figuring out what exactly each individual cares about, which things all humans care about so that we can simplify some calculations, and so on. That's obviously the most time-consuming, research-intensive part, too.

Comment author: BerryPick6 15 February 2013 06:06:12PM 0 points [-]

any two theories which make different object-level predictions can likewise have truth values.

Would you mind giving three examples of cases where Deontology being true gives different predictions than Consequentialism being true? This is another extension of the original question posed, which you've been dodging.

Comment author: whowhowho 15 February 2013 06:14:05PM *  0 points [-]

Would you mind giving three examples of cases where Deontology being true gives different predictions than Consequentialism being true?

Deontology says you should not push the fat man under the trolley (whereas consequentialism says you should), and there are various other examples that are well known in the literature.

This is another extension of the original question posed, which you've been dodging.

I have not been "dodging" it. The question seemed to frame the issue as one of being able to predict events that are observed passively. That misses the point on several levels. For one thing, it is not the case that empirical proof is the only kind of proof. For another, no moral theory "does" anything unless you act on it. And that includes CEV.

Comment author: BerryPick6 15 February 2013 07:00:04PM 0 points [-]

Deontology says you should not push the fat man under the trolley, and various other examples that are well known in the literature.

This would still be the case even if Deontology were false. This leads me to strongly suspect that whether or not it is true is a meaningless question. There is no test I can think of which would determine its veracity.

Comment author: BerryPick6 15 February 2013 11:46:54AM 0 points [-]

I don't know why you would want to say you have an explanation of morality when you are an error theorist.

Error theorists are cognitivists. The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist); he is precisely asking you what it would mean for U or D to have truth values.

I also don't know why you are an error theorist. U-ism and D-ology are rival answers to the question "what is the right way to resolve conflicts of interest?".

When they are both trying to give accounts of what it would mean for something to be "right", it seems this question becomes pretty silly.

Comment author: DaFranker 15 February 2013 03:59:15PM *  1 point [-]

The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist,)

I'm not sure at all what those mean. If they mean that I think there don't exist any sentences about morality that can have truth values, that is false. "DaFranker finds it immoral to coat children in burning napalm" is true, with more confidence than I can reasonably express (I'm about as certain of this belief about my moral system as I am of things like 2 + 2 = 4).

However, the sentence "It is immoral to coat children in burning napalm" returns an error for me.

You could say I consider the function "isMoral?" to take as input a morality function, a current worldstate, and an action to be applied to this worldstate that one wants to evaluate whether it is moral or not. A wrapper function "whichAreMoral?" exists to check more complicated scenarios with multiple possible actions and other fun things.

See, if the "morality function" input parameter is omitted, the function just crashes. If the current worldstate is omitted, the morality function gets run with empty variables and 0s, which means that the whole thing is meaningless and not connected to anything in reality.
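A minimal executable sketch of that signature, where the toy morality function and worldstate are invented for illustration:

```python
# Executable toy version of the "isMoral?" signature: it takes a morality
# function, a worldstate, and an action, and only produces a truth-value
# once all three are supplied and the algorithm is run.

def is_moral(morality_fn, worldstate, action):
    if morality_fn is None:
        raise TypeError("no morality function supplied")  # the "crash"
    return morality_fn(worldstate, action)

never_burn = lambda w, a: a != "coatChildInNapalm"   # toy morality function
world = {"children": 1}                              # toy worldstate

print(is_moral(never_burn, world, "coatChildInNapalm"))  # False
print(is_moral(never_burn, world, "giveChildBlanket"))   # True
```

Omitting the morality-function argument reproduces the "crash" behaviour described above; the function definition by itself never evaluates to True or False.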

he is precisely asking you what it would mean for U or D to have truth values.

Yes.

In the example above, my "isMoral?" function can only return a truth-value when you give it inputs and run the algorithm. You can't look at the overall code defining the function and give it a truth-value. That's just completely meaningless. My current understanding of U and D is that they're fairly similar to this function.

When they are both trying to give accounts of what it would mean for something to be "right", it seems this question becomes pretty silly.

I agree somewhat. To use another code analogy, here I've stumbled upon the symbol "Right", and then I look back across the code for this discussion and I can't find any declarations or "Right = XXXXX" assignment operations. So clearly the other programmers are using different linked libraries that I don't have access to (or they forgot that "Right" doesn't have a declaration!)

Comment author: whowhowho 15 February 2013 04:43:32PM *  0 points [-]

If they mean that I think there doesn't exist any sentences about morality that can have truth values, that is false. "DaFranker finds it immoral to coat children in burning napalm" is true, with more confidence than I can reasonably express

An error theorist could agree with that. It isn't really a statement about morality; it is about belief. Consider "Eudoximander considers it prudent to refrain from rape, so as to avoid being torn apart by vengeful harpies". That isn't a true statement about harpies.

See, if the "morality function" input parameter is omitted, the function just crashes. If the current worldstate is omitted, the morality function gets run with empty variables and 0s, which means that the whole thing is meaningless and not connected to anything in reality.

And it doesn't matter what the morality function is? Any mapping from input to output will do?

You can't look at the overall code defining the function and give it a truth-value. That's just completely meaningless.

So is it meaningless that

  • some simulations do (not) correctly model the simulated system

  • some commercial software does (not) fulfil a real-world business requirement

  • some algorithms do (not) correctly compute mathematical functions

  • some games are (not) entertaining

  • some trading software does (not) return a profit

I agree somewhat. To use another code analogy, here I've stumbled upon the symbol "Right", and then I look back across the code for this discussion and I can't find any declarations or "Right = XXXXX" assignment operations.

It's worth noting that natural language is highly contextual. We know in broad terms what it means to get a theory "right". That's "right" in one context. In this context we want a "right" theory of morality, that is, a theoretically-right theory of the morally-right.

Comment author: DaFranker 15 February 2013 06:49:39PM 1 point [-]

And it doesn't matter what the morality function is? Any mapping from input to output will do?

Yes.

I have a standard library in my own brain that determines what I think looks like a "good" or "useful" morality function, and I only send morality functions that I've approved into my "isMoral?" function. But "isMoral?" can take any properly-formatted function of the right type as input.

And I have no idea yet what it is that makes certain morality functions look "good" or "useful" to me. Sometimes, to try and clear things up, I try to recurse "isMoral?" on different parameters.

e.g.: "isMoral? defaultMoralFunc w1 (isMoral? newMoralFunc w1 BurnBabies)" would tell me whether my default morality function considers moral the evaluation and results of whether the new morality function considers burning babies moral or not.

An error theorist could agree with that. It isn't really a statement about morality; it is about belief. Consider "Eudoximander considers it prudent to refrain from rape, so as to avoid being torn apart by vengeful harpies". That isn't a true statement about harpies.

I'm not sure what you mean by "it isn't really a statement about morality, it is about belief."

Yes, I have the belief that I consider it immoral to coat children in napalm. This previous sentence is certainly a statement about my beliefs. "I consider it immoral to coat children in napalm" certainly sounds like a statement about my morality though.

"isMoral? DaFranker_IdealMoralFunction Universe coatChildInNapalm = False" would be a good way to put it.

It is a true statement about my ideal moral function that it considers it better not to coat a child in burning napalm. The declaration and definition of "better" here are inside the source code of DaFranker_IdealMoralFunction, and I don't have access to that source code (it's probably not even written yet).

Also note that "isMoral? MoralIntuition w a" =/= "isMoral? [MoralFunctionsInBrain] w a" =/= "isMoral? DominantMoralFunctionInBrain w a" =/= "isMoral? CurrentMaxMoralFunctionInBrain w a" =/= "isMoral? IdealMoralFunction w a".

In other words, when one thinks of whether or not to coat a child in burning napalm, many functions are executed in the brain, and some of them may disagree on the betterness of some details of the situation. One of those functions usually takes the lead and becomes what the person actually does when faced with that situation (this dominance is dynamically computed at runtime, so at each evaluation the result may be different if, for instance, one's moral intuitions have changed the internal power balance within the brain). One could in theory make up a function that represents the pareto-optimal compromise of all those functions. All of this is reviewed in very synthesized form by the conscious mind to generate a Moral Intuition. All of which is very different from what would happen if the conscious mind could read the source code for the set of moral functions in the brain and edit things to be the way it prefers, recursing toward a unique ideal moral function.
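A toy sketch of that runtime-dominance picture (the function names and power weights are invented):

```python
# Several moral functions evaluate the same (worldstate, action); whichever
# currently holds the most "power" decides, and the power balance is read
# fresh at each call, so shifting it changes later verdicts.

moral_fns = {
    "intuition":  lambda w, a: a != "burnBabies",
    "calculator": lambda w, a: w.get("net_utility", 0) >= 0,
}
power = {"intuition": 0.7, "calculator": 0.3}

def decide(worldstate, action):
    dominant = max(power, key=power.get)   # dominance recomputed at runtime
    return moral_fns[dominant](worldstate, action)

before = decide({"net_utility": 5}, "burnBabies")   # intuition decides: False
power["calculator"] = 0.9                           # the internal balance shifts
after = decide({"net_utility": 5}, "burnBabies")    # calculator now decides: True
print(before, after)
```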

So is it meaningless that

  • some simulations do (not) correctly model the simulated system

  • some commercial software does (not) fulfil a real-world business requirement

  • some algorithms do (not) correctly compute mathematical functions

  • some games are (not) entertaining

  • some trading software does (not) return a profit

Not quite, but those are different questions. Is the trading software itself "true" or "false"? No. Is my approximate model of how the trading software works "true" or "false"? No.

Is it "true" or "false" that my approximate model of how the trading software works is better than competing alternatives? Yes, it is true (or false). Is it "true" or "false" that the trading software returns a profit? Yes, it is.

See, there's an element of context that lets us ask true/false questions about things. "Politics is true" is meaningless. "Politics is the most efficient method of managing a society" is certainly not meaningless, and with more formal definitions of "efficient" and "managing" one could even produce experimental tests to determine by observations whether that is true or false.

However, when one says "utilitarianism is true", I just don't know what observations to make. "utilitarianism accurately models DaFranker's ideal moral function" is much better - I can compare the two, I can try to refine what is meant by "utilitarianism" here exactly, and I could in principle determine whether this is true or false.

"as per utilitarianism's claim, what is morally best is to maximize the sum of x, where each x is a measure u() of each agent's ideal morality function" sounds like it also makes sense. But then you run into a snag while trying to evaluate the truth-value of this. What is "morally best" here? According to what principle? It seems this "morally best" depends on the reader, or myself, or some other point of reference.

We could decide that this "morally best" means that it is the optimal compromise between all of our morality functions, the optimal way to resolve conflicts of interest with the least total loss in utility and highest total gain.

We could assign a truth-value to that; compute all possible forms of social agreement about morality, all possible rule systems, and if the above utilitarian claim is among the pareto-optimal choices on the game payoff matrix, then the statement is true; if it is strictly dominated by some other outcome, then it is false. Of course, actually running this computation would require solving all kinds of problems and getting various sorts of information that I don't even know how to find ways to solve or get. And might require a Halting Oracle or some form of hypercomputer.

At any rate, I don't think "as per utilitarianism's claim, it is pareto-optimal across all humans to maximize the sum of x, where each x is a measure u() of each agent's ideal morality function" is what you meant by "utilitarianism is true".

Comment author: whowhowho 15 February 2013 12:26:25PM 0 points [-]

Error theorists are cognitivists. The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist,) he is precisely asking you what it would mean for U or D to have truth values.

By comparing them to abstract formulas, which don't have truth values... as opposed to equations, which do, and to applied maths, which does, and theories, which do...

When they are both trying to give accounts of what it would mean for something to be "right", it seems this question becomes pretty silly.

I have no idea why you would say that. Belief in objective morality is debatable but not silly in the way belief in unicorns is. The question of what is right is also about the most important question there is.

Comment author: DaFranker 15 February 2013 04:01:19PM 0 points [-]

By comparing them to abstract formulas, which don't have truth values... as opposed to equations, which do, and to applied maths, which does, and theories, which do...

My main point is that I haven't the slightest clue as to what kind of applied math or equations U and D could possibly be equivalent to. That's why I was asking you, since you seem to know.

Comment author: whowhowho 15 February 2013 04:50:47PM 0 points [-]

I am not assuming they have to be implemented mathematically. And I thought your problem was that you didn't have a procedure for identifying correct theories of morality?

Comment author: BerryPick6 15 February 2013 12:39:30PM *  0 points [-]

By comparing them to abstract formulas, which don't have truth values... as opposed to equations, which do, and to applied maths, which does, and theories, which do...

I'll concede I may have misinterpreted them. I guess we shall wait and see what DF has to say about this.

I have no idea why you would say that. Belief in objective morality is debatable but not silly in the way belief in unicorns is. The question of what is right is also about the most important question there is.

I never said belief in "objective morality" was silly. I said that trying to decide whether to use U or D by asking "which one of these is the right way to resolve conflicts of interest?" when accepting one or the other necessarily changes variables in what you mean by the word 'right' and also, maybe even, the word 'resolve', sounds silly.

Comment author: whowhowho 15 February 2013 12:51:00PM *  0 points [-]

I said that trying to decide whether to use U or D by asking "which one of these is the right way to resolve conflicts of interest?" when accepting one or the other necessarily changes variables in what you mean by the word 'right' and also, maybe even, the word 'resolve', sounds silly.

That would be the case if "right way" meant "morally-right way". But metaethical theories aren't compared by object-level moral rightness, exactly. They can be compared by coherence, practicality, etc. If metaethics were just obviously unsolvable, someone would have noticed.

Comment author: BerryPick6 15 February 2013 12:57:44PM 0 points [-]

That would be the case if "right way" meant "morally-right way".

That's just how I understand that word. 'Right for me to do' and 'moral for me to do' refer to the same things, to me. What differs in your understanding of the terms?

If metaethics were just obviously unsolvable, someone would have noticed.

Remind me what it would look like for metaethics to be solved?

Comment author: BerryPick6 15 February 2013 12:50:28AM 0 points [-]

What would it be like if utilitarianism is true?

I think you've just repeated his question.