Derek Parfit, "On What Matters"
Derek Parfit has published his second book, "On What Matters". Here are reviews by Tyler Cowen and Peter Singer.
Comments (17)
I haven't read "On What Matters". Tyler Cowen's review is so thoroughly plausible and unflattering that I probably won't. Peter Singer's, not so convincing.
Based solely on the reviews, I'm not impressed. The search for the One True something or other goes against the typically LessWrongian idea that human meaning gets assigned to things by computational processes inside human heads. Once you bear this in mind and ask "what does 'ought' mean?", you see that a fully general mind can "ought" to do whatever it damn well pleases. Also relevant.
Optimal solutions are contained within the optimization criteria.
Language is for communication: language is public. The correctness of a meaning comes from a language community, and only from there (dictionaries are a staging post; either they reflect the language community's usage or they are wrong). The proximate assignment of meanings to words within brains is likewise either in line with usage or wrong. You need a brain to understand a word, but a brain cannot grant correctness to any arbitrary meaning-word assignment.
So you ought not do as you damn well please. Especially if you enjoy serial killing. ETA: The "meaning theory" -- the idea that disagreements about morality are disagreements about the meanings of "should", "ought" and "good" -- is put forward to explain a fact about disagreements. The "theory theory" is an alternative explanation.
It is not the case that a disagreement about how to apply a word must be a disagreement about its meaning. In fact, disagreement about how to apply a word implies common ground -- otherwise it is a case of two people talking past each other.
It is not the case that understanding the dictionary meaning of a word, at the level of ordinary linguistic competence, gives competence in applying it. I understand the meaning of the word "Cantonese", but I could not distinguish it from Mandarin. In many areas, it requires specialised knowledge to apply a word.
So, on the theory-theory, there must be a meaning of good/should that anyone can produce. Since there is no disagreement about the basic meaning, we would expect it to sound obvious, a truism. I propose that the truisms in question are something like:
good acts are praiseworthy, bad acts blameworthy.
People who have a theory of X can make assignments, and can explain their rationale for doing so. People who do not have a theory of morality (most people) may or may not be able to make assignments intuitively, but will not be able to explain their rationale.
Theists (and philosophers) can answer moral questions because they have a theory, not because they are aware of some meaning that is denied to other English speakers.
And what Parfit is offering, and what many people can't, is a theory of "ought", not a definition.
The "meaning theory" isn't quite what I"m getting at - the best name would probably be the "algorithm theory." It goes like this: there is some algorithm that determines whether an agent thinks X ought to happen - which we, as humans communicating, can agree means something or other general about the agent's moral thoughts or decision-making algorithm. This algorithm for sorting "ought" is like a definition - a definition is something we can use to sort objects. But it's not like any old definition - this definition cannot be guaranteed to be smaller than the brain of the agent.
Or to put it another way, by "definition" I mean the full specification of a cloud in idea-space.
So even when it's possible to use "ought" in the same general way to refer to some part of an agent, this only refers to the fuller algorithm inside the agent. And though these simple definitions are useful for communication, inside people's heads there is not a simple pointer like "ought not -> blameworthy" - there is a complicated bunch of neurons that takes in sensory information and outputs moral decisions. There is no reason why this complicated bunch of neurons should be exactly identical for each person.
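A toy sketch of that picture, with made-up agents and made-up features standing in for the complicated neural machinery (purely illustrative, nothing from Parfit or the reviews):

```python
# Toy sketch of the "algorithm theory": each agent's "ought" is implemented
# by that agent's own classifier over situations. The word is shared between
# speakers; the underlying algorithm need not be.

def agent_a_ought(act):
    # Stand-in for a complicated bunch of neurons, not a short lookup
    # like "ought not -> blameworthy".
    return not act["harms_others"] or act["prevents_greater_harm"]

def agent_b_ought(act):
    # A different agent, same word "ought", different algorithm.
    return act["benefits_me"]

act = {"harms_others": True, "prevents_greater_harm": False, "benefits_me": True}
print(agent_a_ought(act))  # False
print(agent_b_ought(act))  # True
```

Both agents can use the word "ought" with each other perfectly well; the disagreement lives in the sorting functions, not in the dictionary entry.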
Anyhow, back to some sort of topic:
You seem to be saying that Parfit is not claiming his theory as any sort of One True theory. Is this accurate? The reviews implied it, but maybe I read wrong.
Substituting "definition" for "meaning" isn't going to make much difference.
Or to put it another way, by "definition" I mean the full specification of a cloud in idea-space.
No. But the correct way to handle that theory is to say that different people have different theories/intuitions. Otherwise you fall into the trap of saying there are no real disagreements about morality, or that serial-killer morality is perfectly valid because serial killers can make up their own meaning/definition of "moral".
Surely anyone who argues for a theory is saying that.
I dunno, you could just write down your theory to get it out there, maybe to convince other humans (which is possible, us being imperfect) as a means to spreading your morality.
Talking about "validity" just seems to be a way to disparage any morality/theory/set of intuitions that's not your own. From a general level, anything that fills the cognitive role we talked about as a definition, assigning things something like blameworthiness, counts. And yes, that means the serial-killer morality too.
The way to avoid "dead-end relativism" - e.g. not stopping serial killers even though you think it's bad - is to be comfortable with being an agent with a morality the same way a carefully-built AI could be an agent with a morality. It doesn't actually matter that your morality could have been something else. It is what it is, and so it's true that when I say "right" I'm referring to Manfred::right, some specific algorithm, and I'll still stop serial killers because it's the right thing to do.
We're back to trouble with words again. Like the tree falling in the forest making a sound, "right" can mean different things to different people, and the way to solve the problem is not to argue over whose "right" is right, but to use more words to just care about the actual state of the universe. So I'll stop a serial killer, but I won't argue with him about whether what he's doing is right. Well, I guess that's an oversimplification - humans are persuadable about the darndest things, so arguing about "right" is sometimes fruitful. But if the argument goes nowhere, I'm comfortable with him doing Killer::right, and me doing Manfred::right, and then I'll hit him with a big stick.
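To make the namespacing concrete, here is a minimal sketch (the predicates and the decision rule are invented for illustration, not anyone's actual moral theory):

```python
# Namespaced "right": each agent evaluates acts with their own algorithm,
# and I act on mine whether or not the other agent's algorithm agrees.

def manfred_right(act):   # Manfred::right
    return act != "serial killing"

def killer_right(act):    # Killer::right
    return act == "serial killing"

act = "serial killing"

# No further fact about whose "right" is really right is needed in order to act:
if not manfred_right(act):
    print("Stop him - persuasion first, big stick if that fails.")
```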
You can promote metaethical objectivism without having any particular first-order moral theory in mind; and you can hold that the Meaning Theory is a poor argument for subjectivism without holding objectivism to be true.
Not equally. Not without some hefty question begging. Anything that assigns solutions to numeric problems could be called arithmetic, but some assignments are true and others false.
Counts as correct?
Unless you are one.
I don't find it satisfactory to be compelled to stop things--to treat them as if they are wrong--without knowing why, or even that, they are wrong. I like reasons. I guess you could call me a rationalist.
I've just argued against that. This is going in circles.
I think a universe where force is minimised in favour of persuasion is preferable.
What if you are really wrong? What if you are the guy who is rounding up the slave owner's "property" and dutifully returning them to him?
Didn't you just agree that the algorithm for sorting things into "right" and "not right" is different in different people? Are we really going to have to taboo "means" now?
Then I'm wrong about some fact that I used in translating my morality into actions, e.g. skin color determines intelligence.
Hmm. Actually, it looks like things get complicated here because of human mutability - we can be persuaded of either a thing or its opposite in different conditions. So I really do have to stick with morality as the algorithm itself and not some run of it if I want consistency (though that's not strictly necessary).
Yes, and I also argued, repeatedly, against saying that such an algorithm constitutes either a definition or a meaning.
Not necessarily. You could be wrong about morality itself. You could think property rights are more important than liberty, or that people are means not ends.
Those are not your only choices.
What sort of impact would being right or wrong about morality have that I could notice? For example, let's say someone thinks taxation is inherently morally wrong. What sort of observations are ruled out by this belief, such that making those observations would falsify the belief?
The question is what you should care about.
Is it rational to care more about being able to predict accurately than to care about inadvertently doing evil?