DaFranker comments on A Sketch of an Anti-Realist Metaethics - Less Wrong

Post author: Jack 22 August 2011 05:32AM

Comment author: whowhowho 15 February 2013 04:56:40PM * 0 points

Oh! So that's what they're supposed to be? Good, then clearly neither - rejoice, people of the Earth, the answer has been found! Mathematically you literally cannot do better than Pareto-optimal choices.

Assuming that everything of interest can be quantified, that the quantities can be aggregated and compared, and assuming that anyone can take any amount of loss for the greater good... i.e. assuming all the stuff that utilitarians assume and that their opponents don't.

My answer to this is that there is already a set of utility functions implemented in each human's brain, and this set of utility functions can itself be considered a separate sub-game. If you find solutions to all the problems in this sub-game, you'll end up with a reflectively coherent, CEV-like ("ideal" from now on) utility function for this one human, and that's the utility function you use for that agent in the big game board / decision tree / payoff matrix / what-have-you of moral dilemmas and conflicts of interest.
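A minimal sketch of that intra-personal aggregation, in Python; every name, outcome field, and weight below is a placeholder, and picking the weights is exactly the part the sub-game is supposed to settle:

    # Each sub-utility is a function from outcomes to numbers; the
    # "ideal" utility aggregates them. All names and weights here are
    # hypothetical placeholders.
    sub_utilities = {
        "comfort":  lambda o: o["comfort"],
        "fairness": lambda o: -abs(o["my_share"] - o["their_share"]),
    }
    weights = {"comfort": 1.0, "fairness": 2.0}

    def ideal_utility(outcome):
        """Weighted aggregate of one human's sub-utilities."""
        return sum(w * sub_utilities[name](outcome) for name, w in weights.items())

    print(ideal_utility({"comfort": 3.0, "my_share": 5.0, "their_share": 4.0}))  # 1.0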

No. You can't leap from "a reflectively coherent CEV-like [..] utility function for this one human" to a solution of conflicts of interest between agents. All you have is a set of exquisite models of individual interests, and no way of combining them or trading them off.

I don't know what being an "error theorist" entails,

Strictly speaking, you are a metaethical error theorist. You think there is no meaning to the truth or falsehood of metaethical claims.

Now to re-state my earlier question: which formulations of U and D can have truth values, and what pieces of evidence would falsify each?

Any two theories which have differing logical structure can have truth values, since they can be judged by coherence, etc., and any two theories which make different object-level predictions can likewise have truth values. U and D pass both criteria with flying colours.

And if CEV is not a meaningful metaethical theory, why bother with it? If you can't say that the output of a grand CEV number crunch is what someone should actually do, what is the point?

The formulation of f=ma is that the force applied to an object is equal to the product of the object's mass and its acceleration, for appropriate units of measurement. You can verify this experimentally, literally by pushing objects.
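For concreteness, the check amounts to something like this (all measurements made up for illustration):

    # Toy check of F = m*a against a made-up measurement, SI units.
    m = 2.0                  # measured mass, kg
    a = 3.0                  # measured acceleration, m/s^2
    F_measured = 6.1         # force read off a spring scale, N
    F_predicted = m * a      # the law predicts 6.0 N
    print(abs(F_measured - F_predicted) < 0.2)  # True: agrees within measurement error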

I know. And you determine the truth values of other theories (e.g. maths) non-empirically. Or you can use a mixture. How were you proposing to test CEV?

Comment author: DaFranker 15 February 2013 07:24:36PM * 1 point

Assuming that everything of interest can be quantified, that the quantities can be aggregated and compared, and assuming that anyone can take any amount of loss for the greater good... i.e. assuming all the stuff that utilitarians assume and that their opponents don't.

(...)

No. You can't leap from "a reflectively coherent CEV-like [..] utility function for this one human" to a solution of conflicts of interest between agents. All you have is a set of exquisite models of individual interests, and no way of combining them or trading them off.

That is simply false.

Two individual interests: Making paperclips and saving human lives. Prisoners' dilemma between the two. Is there any sort of theory of morality that will "solve" the problem or do better than number-crunching for Pareto optimality?
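A sketch of that number-crunching, with hypothetical payoffs (paperclips for one player, lives saved for the other) and the standard weak-Pareto dominance check:

    # Hypothetical payoffs: (paperclip maximizer's, life-saver's),
    # keyed by (row action, column action) in a one-shot PD.
    payoffs = {
        ("C", "C"): (3, 3),   # both cooperate
        ("C", "D"): (0, 5),   # row cooperates, column defects
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),   # both defect
    }

    def dominates(q, p):
        """q Pareto-dominates p: at least as good for everyone, strictly better for someone."""
        return all(qi >= pi for qi, pi in zip(q, p)) and any(qi > pi for qi, pi in zip(q, p))

    def pareto_optimal(payoffs):
        """Outcomes that no other outcome Pareto-dominates."""
        return [o for o, p in payoffs.items()
                if not any(dominates(q, p) for q in payoffs.values())]

    print(pareto_optimal(payoffs))
    # [('C', 'C'), ('C', 'D'), ('D', 'C')]: only mutual defection is dominated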

Even things that "cannot be quantified" can be quantified. I can tag non-quantifiable things with "1" and "0". Then I can count them. Then I can compare them: I'd rather have Unquantifiable-A than Unquantifiable-B, unless there's also Unquantifiable-C, so B < A < B+C. I can add any number of unquantifiables and/or unbreakable rules, and devise a numerical system that encodes all my comparative preferences, in which higher numbers are better. Then I can use this to find numbers to put on my Prisoner's Dilemma matrix or any other game-theoretic system and situation.
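A minimal sketch of that encoding; the weights are arbitrary, chosen only so the order they induce reproduces B < A < B+C:

    # Arbitrary positive weights whose induced order matches the stated
    # preferences (B alone < A alone < B and C together).
    values = {"A": 3, "B": 2, "C": 2}

    def score(bundle):
        """A bundle is a set of 'unquantifiable' goods; each is simply
        present (counts its weight) or absent (counts zero)."""
        return sum(values[g] for g in bundle)

    assert score({"B"}) < score({"A"}) < score({"B", "C"})   # 2 < 3 < 4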

Relevant claim from an earlier comment of mine, reworded: There does not exist any "objective", human-independent method of comparing and trading the values within human morality functions.

Game Theory is the science of figuring out what to do when you have different agents with incompatible utility functions. It provides solutions and formalisms both when comparisons between agents' payoffs are impossible and when they are possible. Isn't this exactly what you're looking for? All that's left is the applied work: figuring out what exactly each individual cares about, which things all humans care about so that we can simplify some calculations, and so on. That's obviously the most time-consuming, research-intensive part, too.
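One example of a solution concept that never compares payoffs across agents: pure-strategy Nash equilibrium, sketched below for the hypothetical Prisoner's Dilemma payoffs from earlier (repeated so the snippet stands alone):

    # Same hypothetical payoff matrix as above, repeated for self-containment.
    payoffs = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }
    actions = ["C", "D"]

    def nash_equilibria(payoffs):
        """Pure-strategy Nash equilibria: outcomes where neither player gains
        by unilaterally switching. Payoffs are only ever compared across one
        player's own options, never between players."""
        eq = []
        for (r, c), (u_row, u_col) in payoffs.items():
            row_ok = all(u_row >= payoffs[(r2, c)][0] for r2 in actions)
            col_ok = all(u_col >= payoffs[(r, c2)][1] for c2 in actions)
            if row_ok and col_ok:
                eq.append((r, c))
        return eq

    print(nash_equilibria(payoffs))  # [('D', 'D')]: the classic PD outcome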