whowhowho comments on A Sketch of an Anti-Realist Metaethics - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
...and it's a counsel of despair when it comes to getting people to ultimately agree about morality. Which, for those who think that is a vital feature, would incline them to regard it as an error theory.
Have you ever heard of Game Theory?
Because I don't see why this counsel of despair couldn't crack down some math and figure out Pareto-optimal moral rules or laws or agreements, and run with those. If they know enough about their own moralities to be a "counsel of despair", they should know enough to put down rough estimates and start shutting up.
That presupposes something like utilitarianism. If something like deontology is true, then number-crunched solutions could involve unjustifiable violations of rights.
Could you humor me for an example? What would the universe look like if "deontology is true" versus a universe where "deontology is false"? Where is the distinction?
I don't see how a deontological system would prevent number-crunching. You just number-crunch for a different target: find the pareto optima that minimize the amount of rule-breaking and/or the importance of the rules broken.
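To make the "different target" concrete, here is a minimal sketch of what number-crunching under a deontological scoring might look like. The option names and scores are entirely invented for illustration; each option is scored by how many rules it breaks and the combined importance of the rules broken, and we keep only the Pareto-undominated options.

```python
# Hypothetical deontological "number-crunch": each option is scored as
# (rules_broken, total_importance_of_rules_broken); lower is better on both.
options = {
    "do_nothing":     (2, 7),   # breaks 2 rules, combined importance 7
    "intervene":      (1, 9),   # breaks 1 very important rule
    "compromise":     (1, 4),
    "full_violation": (3, 12),
}

def dominates(a, b):
    """a Pareto-dominates b if a is no worse on both counts and differs."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

pareto_optimal = [
    name for name, score in options.items()
    if not any(dominates(other, score) for other in options.values())
]
print(pareto_optimal)  # -> ['compromise']
```

Nothing here requires aggregating anyone's happiness; the same machinery runs whether the target is "maximise welfare" or "minimise rule-breaking".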
What would it be like if utilitarianism is true? Or the axiom of choice? Or the continuum hypothesis?
I don't see how a description of the neurology of moral reasoning tells you how to crunch the numbers -- which decision theory you need to use to implement which moral theory to resolve conflicts in the right way.
This statement seems meaningless to me. As in "Utilitarianism is true" computes in my mind the exact same way as "Politics is true" or "Eggs are true".
The term "utilitarianism" encompasses a broad range of philosophies, but on LessWrong it seems more commonly used to mean roughly some sort of mathematical model for computing the relative values of different situations, based on certain value assumptions about the elements of those situations and a thingy called a "utility function".
If this latter meaning is used, "utilitarianism is true" is a complete type error, just like "Blue is true" or "Eggs are loud". You can't say that the mathematical formulas and formalisms of utilitarianism are "true" or "false", they're just formulas. You can't say that "x = 5" is "true" or "false". It's just a formula that doesn't connect to anything, and that "x" isn't related to anything physical - I just pinpointed "x" as a variable, "5" as a number, and then declared them equivalent for the purposes of the rest of this comment.
This is also why I requested an example for deontology. To me, "deontology is true" sounds just like those examples. Neither "utilitarianism is true" or "deontology is true" correspond to well-formed statements or sentences or propositions or whatever the "correct" philosophical term is for this.
Wait, seriously? That sounds like a gross misuse of terminology, since "utilitarianism" is an established term in philosophy that specifically talks about maximising some external aggregative value such as "total happiness", or "total pleasure minus suffering". Utility functions are a lot more general than that (ie. need not be utilitarian, and can be selfish, for example).
To an untrained reader, this would seem as if you'd just repeated in different words what I said ;)
I don't see "utilitarianism" itself used all that often, to be honest. I've seen the phrase "in utilitarian fashion", usually referring more to my description than the traditional meaning you've described.
"Utility function", on the other hand, gets thrown around a lot with a very general meaning that seems to be "If there's something you'd prefer than maximizing your utility function, then that wasn't your real utility function".
I think one important source of confusion is that LWers routinely use concepts that were popularized or even invented by primary utilitarians (or so I'm guessing, since these concepts come up on the wikipedia page for utilitarianism), and then some reader assumes they're using utilitarianism as a whole in their thinking, and the discussion drifts from "utility" and "utility function" to "in utilitarian fashion" and "utility is generally applicable" to "utilitarianism is true" and "(global, single-variable-per-population) utility is the only thing of moral value in the universe!".
Everywhere outside of LW, utilitarianism means a moral theory. It, or some specific variation of it, is therefore capable of being true or false. The point could as well have been made with some less mathematical moral theory. The truth or falsehood of moral theories doesn't have direct empirical consequences, any more than the truth or falsehood of abstract mathematical claims. Shut-up-and-calculate doesn't work here, because one is not using utilitarianism or any other moral theory for predicting what will happen; one is using it to plan what one will do.
And I can't say that f, m and a mean something in f=ma? When you apply maths, the variables mean something. That's what application is. In U-ism, the input is happiness, or life-years, or something, and the output is a decision that is put into practice.
I don't know why you would want to say you have an explanation of morality when you are an error theorist..
I also don't know why you are an error theorist. U-ism and D-ology are rival answers to the question "what is the right way to resolve conflicts of interest?". I don't think that is a meaningless or unanswerable question. I don't see why anyone would want to pluck a formula out of the air, number-crunch using it, and then make it policy. Would you walk into a suicide booth because someone had calculated, without justifying the formula used, that you were a burden to society?
I think you are making a lot of assumptions about what I think and believe. I also think you're coming dangerously close to being perceived as a troll, at least by me.
Oh! So that's what they're supposed to be? Good, then clearly neither - rejoice, people of the Earth, the answer has been found! Mathematically you literally cannot do better than Pareto-optimal choices.
The real question, of course, is how to put meaningful numbers into the game theory formula, how to calculate the utility of the agents, how to determine the correct utility functions for each agent.
My answer to this is that there is already a set of utility functions implemented in each humans' brains, and this set of utility functions can itself be considered a separate sub-game, and if you find solutions to all the problems in this subgame you'll end up with a reflectively coherent CEV-like ("ideal" from now on) utility function for this one human, and then that's the utility function you use for that agent in the big game board / decision tree / payoff matrix / what-have-you of moral dilemmas and conflicts of interest.
So now what we need is better insights and more research into what these sets of utility functions look like, how close to completion they are, and how similar they are across different humans.
Note that I've never even heard of a single human capable of knowing or always acting on their "ideal utility function". All sample humans I've ever seen also have other mechanisms interfering or taking over which makes it so that they don't always act even according to their current utility set, let alone their ideal one.
I don't know what being an "error theorist" entails, but you claiming that I am one seems valid evidence that I am one so far, so sure. Whatever labels float your boat, as long as you aren't trying to sneak in connotations about me or committing the noncentral fallacy. (notice that I accidentally snuck in the connotation that, if you are committing this fallacy, you may be using "worst argument in the world")
Sure. Now to re-state my earlier question: which formulations of U and D can have truth values, and what pieces of evidence would falsify each?
The formulation for f=ma is that the force applied to an object is equal to the product of the object's mass and its acceleration, for certain appropriate units of measurements. You can experimentally verify this by pushing objects, literally. If for some reason we ran a well-designed, controlled experiment and suddenly more massive objects started accelerating more than less massive objects with the same amount of force, or more generally the physical behavior didn't correspond to that equation, the equation would be false.
Assuming that everything of interest can be quantified, that the quantities can be aggregated and compared, and that anyone can take any amount of loss for the greater good... i.e. assuming all the stuff that utilitarians assume and that their opponents don't.
No. You can't leap from "a reflectively coherent CEV-like [..] utility function for this one human" to a solution of conflicts of interest between agents. All you have is a set of exquisite models of individual interests, and no way of combining them or trading them off.
Strictly speaking, you are a metaethical error theorist. You think there is no meaning to the truth or falsehood of metaethical claims.
Any two theories which have differing logical structure can have truth values, since they can be judged by coherence, etc., and any two theories which make different object-level predictions can likewise have truth values. U and D pass both criteria with flying colours.
And if CEV is not a meaningful metaethical theory, why bother with it? If you can't say that the output of a grand CEV number crunch is what someone should actually do, what is the point?
I know. And you determine the truth values of other theories (e.g. maths) non-empirically. Or you can use a mixture. How were you proposing to test CEV?
That is simply false.
Two individual interests: Making paperclips and saving human lives. Prisoners' dilemma between the two. Is there any sort of theory of morality that will "solve" the problem or do better than number-crunching for Pareto optimality?
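As an illustration of what "number-crunching for Pareto optimality" buys you here, consider a toy payoff matrix for that dilemma. The numbers are invented, and deliberately in each agent's own units (paperclips vs. lives), so the payoffs are not comparable across agents; only Pareto comparisons apply.

```python
# Toy prisoners' dilemma: (paperclipper_payoff, lifesaver_payoff)
# for each (paperclipper_strategy, lifesaver_strategy) pair.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def pareto_optimal(payoffs):
    outcomes = list(payoffs.items())
    def dominated(p):
        # p is dominated if some outcome is at least as good for both
        # agents and strictly different.
        return any(q[0] >= p[0] and q[1] >= p[1] and q != p
                   for _, q in outcomes)
    return [strat for strat, p in outcomes if not dominated(p)]

# Every outcome except mutual defection is Pareto-optimal:
print(pareto_optimal(payoffs))
```

The classic result: mutual defection, which is what each agent gets by reasoning alone, is the one outcome that is Pareto-dominated. That is the sense in which the number-crunching "does better".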
Even things that cannot be quantified can be quantified. I can quantify non-quantifiable things with "1" and "0". Then I can count them. Then I can compare them: I'd rather have Unquantifiable-A than Unquantifiable-B, unless there's also Unquantifiable-C, so B < A < B+C. I can add any number of unquantifiables and/or unbreakable rules, and devise a numerical system that encodes all my comparative preferences in which higher numbers are better. Then I can use this to find numbers to put on my Prisoners Dilemma matrix or any other game-theoretic system and situation.
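The encoding described above can be written out directly. The particular numbers below are invented; any assignment that satisfies the stated preference ordering B < A < B + C would serve equally well.

```python
# Provisional numbers for "unquantifiable" goods, chosen only so that
# higher totals track the stated preferences: B < A < B + C.
values = {"A": 3, "B": 2, "C": 2}

assert values["B"] < values["A"]                  # prefer A to B alone
assert values["A"] < values["B"] + values["C"]    # prefer B together with C to A

# With numbers like these in hand, outcomes can be dropped into a
# game-theoretic payoff matrix and compared like any other quantities.
```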
Relevant claim from an earlier comment of mine, reworded: There does not exist any "objective", human-independent method of comparing and trading the values within human morality functions.
Game Theory is the science of figuring out what to do in case you have different agents with incompatible utility functions. It provides solutions and formalisms both when comparisons between agents' payoffs are impossible and when they are possible. Isn't this exactly what you're looking for? All that's left is applied stuff - figuring out what exactly each individual cares about, which things all humans care about so that we can simplify some calculations, and so on. That's obviously the most time-consuming, research-intensive part, too.
Would you mind giving three examples of cases where Deontology being true gives different predictions than Consequentialism being true? This is another extension of the original question posed, which you've been dodging.
Error theorists are cognitivists. The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist,) he is precisely asking you what it would mean for U or D to have truth values.
When they are both trying to give accounts of what it would mean for something to be "right", it seems this question becomes pretty silly.
I'm not sure at all what those mean. If they mean that I think there doesn't exist any sentences about morality that can have truth values, that is false. "DaFranker finds it immoral to coat children in burning napalm" is true, with more confidence than I can reasonably express (I'm about as certain of this belief about my moral system as I am in things like 2 + 2 = 4).
However, the sentence "It is immoral to coat children in burning napalm" returns an error for me.
You could say I consider the function "isMoral?" to take as input a morality function, a current worldstate, and an action to be applied to this worldstate that one wants to evaluate whether it is moral or not. A wrapper function "whichAreMoral?" exists to check more complicated scenarios with multiple possible actions and other fun things.
See, if the "morality function" input parameter is omitted, the function just crashes. If the current worldstate is omitted, the morality function gets run with empty variables and 0s, which means that the whole thing is meaningless and not connected to anything in reality.
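A minimal Python sketch of the signature just described; the names, the toy morality function, and the error behaviour are my own gloss on the comment, not anyone's established formalism.

```python
def is_moral(morality_function, worldstate, action):
    """Evaluate an action against a given morality function and worldstate."""
    if morality_function is None:
        # Omitting the morality function "crashes": the question has no referent.
        raise TypeError("no morality function supplied")
    if worldstate is None:
        # Omitting the worldstate runs on empty variables: the answer is
        # disconnected from anything in reality.
        worldstate = {}
    return morality_function(worldstate, action)

def which_are_moral(morality_function, worldstate, actions):
    """Wrapper: evaluate several candidate actions at once."""
    return [a for a in actions if is_moral(morality_function, worldstate, a)]

# A toy morality function standing in for one commenter's values:
da_franker = lambda world, action: action != "coat children in napalm"
print(is_moral(da_franker, {}, "coat children in napalm"))  # -> False
```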
Yes.
In the example above, my "isMoral?" function can only return a truth-value when you give it inputs and run the algorithm. You can't look at the overall code defining the function and give it a truth-value. That's just completely meaningless. My current understanding of U and D is that they're fairly similar to this function.
I agree somewhat. To use another code analogy, here I've stumbled upon the symbol "Right", and then I look back across the code for this discussion and I can't find any declarations or "Right = XXXXX" assignment operations. So clearly the other programmers are using different linked libraries that I don't have access to (or they forgot that "Right" doesn't have a declaration!)
An error theorist could agree with that. It isn't really a statement about morality; it is a statement about belief. Consider "Eudoximander considers it prudent to refrain from rape, so as to avoid being torn apart by vengeful harpies". That isn't a true statement about harpies.
And it doesn't matter what the morality function is? Any mapping from input to output will do?
So is it meaningless that
some simulations do (not) correctly model the simulated system
some commercial software does (not) fulfil a real-world business requirement
some algorithms do (not) correctly compute mathematical functions
some games are (not) entertaining
some trading software does (not) return a profit
It's worth noting that natural language is highly contextual. We know in broad terms what it means to get a theory "right". That's "right" in one context. In this context we want a "right" theory of morality, that is, a theoretically-right theory of the morally-right.
By comparing them to abstract formulas, which don't have truth values ... as opposed to equations, which do, and to applied maths, which does, and theories, which do...
I have no idea why you would say that. Belief in objective morality is debatable but not silly in the way belief in unicorns is. The question of what is right is also about the most important question there is.
My main point is that I haven't the slightest clue as to what kind of applied math or equations U and D could possibly be equivalent to. That's why I was asking you, since you seem to know.
I'll concede I may have misinterpreted them. I guess we shall wait and see what DF has to say about this.
I never said belief in "objective morality" was silly. I said that trying to decide whether to use U or D by asking "which one of these is the right way to resolve conflicts of interest?" when accepting one or the other necessarily changes variables in what you mean by the word 'right' and also, maybe even, the word 'resolve', sounds silly.
I think you've just repeated his question.