'Is' and 'Ought' and Rationality

Post author: BobTheBob 05 July 2011 03:53AM

On the face of it, there is a tension in adhering both to the idea that there are facts about what it's rational for people to do and to the idea that natural or scientific facts are all the facts there are. The aim of this post is just to try to make clear why this should be so, and hopefully to get feedback on what people think of the tension.

In short

To a first approximation, a belief is rational just in case you ought to hold it; an action is rational just in case you ought to take it. A person is rational to the extent that she believes and does what she ought to. Being rational, it is fair to say, is a normative or prescriptive property, as opposed to a merely descriptive one. Natural science, on the other hand, is concerned only with descriptive properties of things - what they weigh, how they are composed, how they move, and so on. On the face of it, then, being rational is not the sort of property about which we can theorize scientifically (that is, in the vocabulary of the natural sciences). To put the point another way, rationality concerns what a thing (agent) ought to do, natural science concerns only what it is and will do, and one cannot deduce 'ought' from 'is'.

At greater length

There are at least two is/ought problems, or maybe two ways of thinking about the is/ought problem. The first problem (or way of thinking about the one problem) is posed from a subjective point of view. I am aware that things are a certain way, and that I am disposed to take some course of action, but neither of these things implies that I ought to take any course of action - neither, that is, implies that taking a given course of action would in any sense be right. How do I justify the thought that any given action is the one I ought to take? Or, taking the thought one step further, how, attending only to my own thoughts, do I differentiate merely being inclined to do something from being bound, by some kind of rule or principle or norm, to do it?

This is an interesting question - one which gets to the very core of the concept of being justified, and hence of being rational (rational beliefs being justified beliefs). But it isn't the problem of interest here.

The second problem, the problem of interest, is evident from a purely objective, scientific point of view. Consider a lowly rock. By empirical investigation, we can learn its mass, its density, its mineralogical composition, and any number of other properties. Now, left to their own devices, rocks don't do much of anything, comparatively speaking, so it isn't surprising that we don't expect there to be anything a rock ought to do. In any case, natural science does not imply there is anything it ought to do - I think most will agree.

Consider then a virus particle - a complex of RNA and ancillary molecules. Natural science can tell us how it will behave in various circumstances - whether and how it will replicate itself, and so on - but once again surely there is nothing in biochemistry, genetics or other science which implies there is anything our very particle ought to do. It's true that we may think of it as having the goal to replicate itself, and consider it to have made a mistake if it replicates itself inaccurately, but these conceptions do not issue from science. Any sense in which it ought to do something, or is wrong or mistaken in acting in a given way, is surely purely metaphorical (no?).

How about a bacterium? It's orders of magnitude more complicated, but I don't see that matters are any different as regards what it ought to do. Science has nothing to tell us about what if anything is important to a bacterium, as distinct from what it will tend to do.

Moving up the evolutionary ladder, does the introduction of nervous systems make any difference? What do we think about, say, nematodes or even horseshoe crabs? The feedback mechanisms underlying the self-regulatory processes in such animals may be leaps and bounds more sophisticated than in their non-neural forebears, but it's far from clear how such increasing complexity could introduce goals.

To cut to the chase, how can matters be any different with the members of Homo sapiens? Looked at from a properly scientific point of view, is there any scope for the attribution of purposes or goals, or for the appraisal of our behaviour in any sense as right or wrong? I submit that a mere increase in complexity - even if by many orders of magnitude - does not turn the trick. To be clear, I'm not claiming there are no such facts - far from it - just that these facts cannot be articulated in the language of purely natural science.

The upshot

The foregoing thoughts are hardly original. David Hume is famous for having observed that ought cannot be derived from is:

In every system of morality, which I have hitherto met with, I have always remark'd, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surpriz'd to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it shou'd be observ'd and explain'd; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention wou'd subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relation of objects, nor is perceiv'd by reason. -David Hume, A Treatise of Human Nature (London: Penguin, 1984) p. 521.

(The issue, together with this quote, is touched on from a different point of view in this post of lukeprog's.) I think these points need facing up to. I see three options:

Option (1): Accept the basic point, stick resolutely to naturalism, and deny that there are any facts as to what it is rational for any given member of Homo sapiens to do. In other words, become an eliminativist about all normative concepts. Paul Churchland, I understand, is an exponent of this position (or of something more nuanced along these lines).

Option (2): Reject the argument above on one or another ground. Try somehow to shoehorn normative facts into a naturalistic world-view, at the possible peril of the coherence of that world-view. I acknowledge that this can be a valiant undertaking for those whose commitments suggest it. Any who embark on it should be aware that there is at least half a century's worth of beleaguered efforts to put this Humpty Dumpty back together - it is not an easy task. One might want to start with the likes of Ruth Garrett Millikan or Jerry Fodor, then their various critics.

Option (3): Accept the argument above and reconcile yourself to the existence of mutually incommensurable but indispensable understandings of yourself. Be happy.

One response

This is already ponderously long, but there is one response worth anticipating, namely, that oughts can be inferred from wants (or preferences or utility functions), and that wants (preferences, utility functions) are naturalistic. The problem here is with the second conjunct - wants (etc.) are not naturalistic. At least, not obviously so (and, incidentally, the same fate befalls beliefs). My explanation is as follows.

An example of the thinking behind this proposal presumably would be something like,

P) X's wanting that X eats an apple entails, other things being equal, that X ought to eat an apple.

The force of naturalism or physicalism in this context is presumably a commitment to some empirically testable analysis concerning wants comparable to "Water is H2O", e.g.

  • to want that one eats an apple = to be in brain state ABC

or

  • to want that one eats an apple = to be composed in part of some structure (brain or other) which implements a Turing machine which is in computational state DEF

or ...

Now, if both of these thoughts (the thought about wants entailing oughts and the thought about there being an empirically testable analysis) are correct, then it should be possible to substitute the analysis into (P):

P') That X's brain is in state ABC entails, other things being equal, that X ought to eat an apple.

or

P'') That X is composed in part of some structure which implements a Turing machine which is in computational state DEF entails, other things being equal, that X ought to eat an apple.

or... (your favourite theory here)

I submit that neither P' nor P'' is at all plausible, for the reasons reviewed above. A thing's merely being in a certain physical or computational state does not imbue it with purpose. Such facts do not entail that there is anything which matters to it, or which makes anything right or wrong for the thing. Concerning P'', note that although computers are sometimes thought of as having purposes in virtue of our having designed them (the computer 'makes a mistake' when it calculates an incorrect value), there is not normally thought to be any sense in which they have intrinsic purposes independent of ours, as the view under scrutiny would require of them.

There are all kinds of possible refinements of P' and P'' (P' and P'' are commonly viewed as non-starters anyway, owing to the plausibility of so-called semantic externalism which they ignore). My question is whether any more refined option shows a way to defeat the objection being raised here.

Wants do indeed imply oughts. Since there plausibly is no physical or computational state being in which implies there is anything one ought to do, wanting is not identical to being in a physical or computational state.

Comments (70)

Comment author: Wei_Dai 05 July 2011 06:51:01AM 3 points [-]

The problem here is with the second conjunct - wants (etc.) are not naturalistic.

Suppose I hear Bob say "I want to eat an apple." Am I justified in assigning a higher probability to "Bob wants to eat an apple" after I hear this than before (assuming I don't have some other evidence to the contrary, like someone is holding a gun to Bob's head)? Assuming the answer is yes, and given that "Bob said 'I want to eat an apple.'" is a naturalistic fact, how did I learn something non-naturalistic from that?

Comment author: BobTheBob 05 July 2011 10:14:58PM 1 point [-]

Suppose I hear Bob say "I want to eat an apple." Am I justified in assigning a higher probability to "Bob wants to eat an apple" after I hear this than before (assuming I don't have some other evidence to the contrary, like someone is holding a gun to Bob's head)?

I think this question hits the nail on the head. You are justified in assigning a higher probability to "Bob wants to eat an apple" just in case you are already justified in taking Bob to be a rational agent (other things being equal...). If Bob isn't at least minimally rational, you can't even get so far as construing his words as English, let alone to trust that his intent in uttering them is to convey that he wants to eat an apple (think about assessing a wannabe AI chatbot, here). But in taking Bob to be rational, you are already taking him to have preferences and beliefs, and for there to be things which he ought or ought not to do. In other words, you have already crossed beyond what mere natural science provides for. This, anyway, is what I'm trying to argue.

Comment author: Wei_Dai 05 July 2011 10:53:22PM 4 points [-]

I think I kind of see what you're getting at. In order to recognize that Bob is rational, I have to have some way of knowing the properties of rationality, and the way we learn such properties does not seem to resemble the methods of the empirical sciences, like physics or chemistry.

But to me it does seem to bear some resemblance to the methods of mathematics. For example, in number theory we try to capture some of our intuitions about "natural numbers" in a set of axioms, which then allows us to derive other properties of natural numbers. In the study of rationality we have, for example, the Von Neumann–Morgenstern axioms. Although there is much more disagreement about what an appropriate set of axioms might be where rationality is concerned, the basic methodology still seems similar. Do you agree?
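To make the analogy concrete, here is a small illustration of my own (not from the discussion): treating rationality axioms the way number theory treats its axioms, as formal constraints we can state and check mechanically. The sketch checks two von Neumann–Morgenstern-style conditions - completeness and transitivity - on a toy preference relation; the outcomes and ranking are invented for the example.

```python
from itertools import combinations, permutations

def is_complete(outcomes, prefers):
    """Every pair of outcomes is comparable: a >= b or b >= a."""
    return all(prefers(a, b) or prefers(b, a)
               for a, b in combinations(outcomes, 2))

def is_transitive(outcomes, prefers):
    """If a >= b and b >= c, then a >= c."""
    return all(not (prefers(a, b) and prefers(b, c)) or prefers(a, c)
               for a, b, c in permutations(outcomes, 3))

# A toy agent whose preferences happen to come from a numeric ranking,
# so both axioms hold by construction.
outcomes = ["apple", "pear", "nothing"]
ranking = {"apple": 2, "pear": 1, "nothing": 0}
prefers = lambda a, b: ranking[a] >= ranking[b]

print(is_complete(outcomes, prefers))    # True
print(is_transitive(outcomes, prefers))  # True
```

Of course, checking that a given relation satisfies the axioms is the easy part; the disagreement is over which axioms deserve the name "rationality" in the first place.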

Comment author: BobTheBob 06 July 2011 04:30:01PM 0 points [-]

I appreciate this clarification. The point is meant to be about, as you say, the empirical sciences. I agree that there can be arbitrarily sophisticated scientific theories concerning rational behaviour - just that these theories aren't straightforwardly continuous with the theories of natural science.

In case you haven't encountered it and might be interested, the underdetermination problem associated with inferring to beliefs and desires from mere behaviour has been considered in some depth by Donald Davidson, e.g. in his Essays on Actions and Events.

Comment author: Wei_Dai 06 July 2011 10:34:20PM 1 point [-]

I agree that there can be arbitrarily sophisticated scientific theories concerning rational behaviour - just that these theories aren't straightforwardly continuous with the theories of natural science.

In your OP, you wrote that you found statements like

That X's brain is in state ABC entails, other things being equal, that X ought to eat an apple.

implausible.

It seems quite possible to me that, perhaps through some method other than the methods of the empirical sciences (for example, through philosophical inquiry), we can determine that among the properties of "want" is that "want to eat an apple" correctly describes the brain state ABC or computational state DEF (or something of that nature). Do you still consider that statement implausible?

Comment author: BobTheBob 08 July 2011 03:14:28AM *  0 points [-]

This seems reasonable, but I have to ask about "correctly describes". The statement

"want to eat an apple" implies being in brain state ABC or computational state DEF (or something of that nature)

is plausible to me. The reverse implication, though, raises a problem:

being in brain state ABC or computational state DEF (or something of that nature) implies "want to eat an apple"

But maybe neither of these is what you have in mind?

Comment author: Wei_Dai 08 July 2011 03:51:41AM 0 points [-]

I think I mean the latter. What problem do you see with it?

Comment author: Peterdjones 08 July 2011 12:42:34PM 1 point [-]

I don't see the problem with the latter either.

Comment author: BobTheBob 08 July 2011 03:14:21PM 0 points [-]

I do accept that 'wants' imply 'oughts'. It's an oversimplification, but the thought is that statements such as

  • X's wanting that X eat an apple implies (many other things being equal) that X ought to eat an apple.

are intuitively plausible. If wanting carries no implications for what one ought to do, I don't see how motivation can get off the ground.

Now, if we have

1) wanting that P implies one ought to do Q,

and

2) being in physical state ABC implies wanting that P

then, by transitivity of implication, we get

3) being in physical state ABC implies one ought to do Q

And this is just the kind of implication I'm trying to show is problematic.
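The transitivity step itself is uncontroversial - it is elementary propositional logic. Here is a sketch in Lean (my formalization, not from the discussion), with the three statements abstracted to bare propositions, which makes plain that the work is all being done by the premises, not the inference:

```lean
-- Hypothetical syllogism: if wanting P implies an ought, and being in
-- physical state ABC implies wanting P, then ABC implies the ought.
example (ABC WantsP OughtQ : Prop)
    (h1 : WantsP → OughtQ) (h2 : ABC → WantsP) : ABC → OughtQ :=
  fun a => h1 (h2 a)
```

So anyone who accepts (1) and (2) is committed to (3); the post's strategy is to run this in reverse, keeping (1) and rejecting (3) in order to deny (2).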

Comment author: Wei_Dai 08 July 2011 06:36:05PM 0 points [-]

Would it be fair to say that your position is that there could be two physically identical brains, one of which wants to eat an apple while the other doesn't, or perhaps one of which is rational while the other isn't? In other words, that preference-zombies or rationality-zombies could exist?

(In case it's not clear why I'm saying this, this is what accepting

"want to eat an apple" implies being in brain state ABC or computational state DEF (or something of that nature)

while denying

being in brain state ABC or computational state DEF (or something of that nature) implies "want to eat an apple"

would imply.)

Comment author: BobTheBob 11 July 2011 03:53:36PM *  0 points [-]

I think your question again gets right to the nub of the matter. I have no snappy answer to the challenge - here is my long-winded response.

The zombie analogy is a good one. I understand it's meant just as an analogy - the intent is not to fall into the qualia quagmire. The thought is that, from a purely naturalistic perspective, people can only properly be seen as, as you put it, preference- or rationality-zombies.

The issue here is the validity of identity claims of the form,

  • Wanting that P = being in brain state ABC

My answer is to compare them to the fate of identity claims relating to sensations (qualia again), such as

  • Having sensation S (eg, being in pain) = being in brain state DEF

Suppose being in pain is found empirically always to correlate with being in brain state DEF, and the identity is proposed. Qualiaphiles will object, saying that this identity misses what's crucial to pain, viz., how it feels. The qualiaphile's thought can be defended by considering the logic of identity claims generally (this is adapted from Saul Kripke's Naming and Necessity).

Scientific identity claims are necessary - if water = H2O in this world, then water = H2O in all possible worlds. That is, because water is a natural kind, whatever it is, it couldn't have been anything else. It is possible for water to present itself to us in a different phenomenal aspect ('ice9'!), but this is OK because what's essential to water is its underlying structure, not its phenomenal properties. The situation is different for pain - what's essential to pain is its phenomenal properties. Because pain essentially feels like this (so the story goes), its correlation with being in brain state DEF can only be contingent. Since identities of this kind, if true, are by their natures necessary, the identity is false.

There is a further step (lots of steps, I admit) to rationality. The thought is that our access to people's rationality is 'direct' in the way our access to pain is. The unmediated judgement of rationality would, if push were to come to shove, trump the scientifically informed, indirect inference from brain states. Defending this proposition would take some doing, but the idea is that we need to understand each other as rational agents before we can get as far as dissecting ourselves to understand ourselves as mere objects.

Comment author: torekp 06 July 2011 10:50:24PM *  0 points [-]

There are underdetermination problems all over the philosophy of science. I don't see how this poses a special problem for norms, or rationality. When two domains of science are integrated, it is often via proposed bridge laws that may not provide an exactly intuitive match. For example, some of the gases that have a high "temperature" when that is defined as mean kinetic energy, might feel somewhat colder than some others with lower "temperature". But we accept the reduction provided it succeeds well enough.

If there are no perfect conceptual matches by definitions of a to-be-reduced term in the terms of the reducing domain, that is not fatal. If we can't find one now, that is even less so.

Comment author: BobTheBob 08 July 2011 04:06:36AM *  0 points [-]

I agree that underdetermination problems are distinct from problems about norms - from the is/ought problem. Apologies if I introduced confusion in mentioning them. They are relevant because they arise (roughly speaking) at the interface between decision theory and empirical science, i.e., where you try to map mere behaviours onto desires and beliefs.

My understanding is that in philosophy of science, an underdetermination problem arises when all evidence is consistent with more than one theory or explanation. You have a scientist, a set of facts, and more than one theory which the scientist can fit to the facts. In answer to your initial challenge, the problem is different for human psychology because the underdetermination is not of the scientist's theory but supposedly of one set of facts (facts about beliefs and desires) by another (behaviour and all cognitive states of the agent). That is, in contrast to the basic case, here you have a scientist, one set of facts - about a person's behaviour and cognitive states - a second set of supposed facts - about the person's beliefs and desires - and the problem is that the former set underdetermines the latter.

Comment author: torekp 12 July 2011 01:21:12AM 0 points [-]

You seem to be introducing a fact/theory dichotomy. That doesn't seem promising.

If we look at successful reductions in the sciences, they can make at least some of our underdetermination problems disappear. Mean kinetic energy of a gas is a more precise notion than "temperature" was in prior centuries, for example. I wouldn't try to wrestle with the concepts of cognitive states and behaviors to resolve underdetermination. Instead, it seems worthwhile to propose candidate bridge laws and see where they get us. I think that Millikan et al may well be onto something.

Comment author: BobTheBob 12 July 2011 01:26:57PM 0 points [-]

As I understand it, the problem of scientific underdetermination can only be formulated if we make some kind of fact/theory distinction - observation/theory would be better, is that ok with you?

I'm not actually seeing how the temperature example is an instance of underdetermination, and I'm a little fuzzy on where bridge laws fit in, but would be open to clarification on these things.

Comment author: torekp 12 July 2011 11:36:41PM 0 points [-]

Well, scientific underdetermination problems can be formulated with a context-relative observation/theory distinction. But this is compatible with seeing observations as potentially open to contention between different theories (and in that sense "theory-laden"). The question is, are these distinctions robust enough to support your argument?

By the way, I'm confused by your use of "cognitive states" in your 08 July comment above, where it is contrasted to beliefs and desires. Did you mean neural states?

Temperature was underdetermined in the early stages of science because the nebulae of associated phenomena had not been sorted out. Sometimes different methods of assessing temperature could conflict. E.g., object A might be hotter to the touch than B, yet when both are placed in contact with C, B warms C and A cools it.

Comment author: BobTheBob 14 July 2011 05:19:05PM 0 points [-]

I'm confused by your use of "cognitive states" in your 08 July comment above

You are quite right - sorry about the confusion. I meant to say behaviour and computational states - the thought being that we are trying to correlate having a belief or desire with being in some combination of these.

The question is, are these distinctions robust enough to support your argument?

I understand you're referring here to the claim -for which I can't take credit- that facts about behaviour underdetermine facts about beliefs and desires. Because the issue -or so I want to argue- is of underdetermination of one set of potential facts (namely, about beliefs and desires) by another (uninterpreted behaviour), rather than of underdetermination of scientific theory by fact or observation, I'm not seeing that the issue of the theory-ladenness of observation ultimately presents a problem.

The underdetermination is pretty easy to show, at least on a superficial level. Suppose you observe

  • a person, X, pluck an apple from a tree and eat it (facts about behaviour).

You infer:

  • X desires that s/he eat an apple, and X believes that if s/he plucks and bites this fruit, s/he will eat an apple.

But couldn't one also infer,

  • X desires that s/he eat a pear, and X believes (mistakenly) that if s/he plucks and bites this fruit, s/he will eat a pear.

or

  • X desires that s/he be healthy, and X believes that if s/he plucks and bites this fruit (whatever the heck it is), s/he will be healthy.

You may think that if you observe enough behaviour, you can constrain these possibilities. There are arguments (which I acknowledge I have not given) which show (or so a lot of people think) that this is not the case - the underdetermination keeps pace.
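The apple/pear case above can be made vivid with a toy model (my construction, not from the thread): two different desire/belief pairs that generate exactly the same observable behaviour, so behaviour alone cannot decide between them. The `act` function and its inputs are invented for the illustration.

```python
def act(desire, belief):
    """A toy agent: take whichever action its beliefs say satisfies its desire."""
    for action, predicted_outcome in belief.items():
        if predicted_outcome == desire:
            return action
    return "do nothing"

# Interpretation 1: X wants an apple and believes the fruit is an apple.
i1 = act(desire="eat apple", belief={"pluck and bite fruit": "eat apple"})

# Interpretation 2: X wants a pear and (mistakenly) believes the fruit is a pear.
i2 = act(desire="eat pear", belief={"pluck and bite fruit": "eat pear"})

print(i1)  # pluck and bite fruit
print(i2)  # pluck and bite fruit - same behaviour, different mental states
```

The claim in the literature is that this is not an artifact of the toy: however much behaviour you add, compensating adjustments to the belief side can preserve the fit.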

Comment author: Will_Sawin 05 July 2011 04:40:10AM 2 points [-]

You are suggesting that "want" implies "ought". One way to interpret this is as a sort of moral relativism - that if I want to punch babies, then I ought to punch babies. With this claim I would disagree strenuously.

But I'm guessing that this is not how you mean it. I'm not sure exactly what concepts of "want" and "ought" you are using.

One point you are making is that we can discuss "ought" in the context of any utility function. You do not seem to be making any claims about what utility function we should choose. You are instead arguing that there is no naturalistic source telling us what utility function to choose. You presumably wish us to pick one and discuss it?

Comment author: BobTheBob 05 July 2011 10:10:58PM 0 points [-]

You're right that I didn't mean this necessarily to be about a specifically moral sense of 'ought'. As for the suggested inference about baby-punching, I would push that onto the 'other things being equal' clause, which covers a multitude of sins. No acceptable theory can entail that one ought to be punching babies, I agree.

The picture I want to suggest should be taken seriously is that on one side of the fence are naturalistic properties, and on the other are properties such as rationality, having wants/beliefs, being bound by norms as to right and wrong (where this can be as meagre a sense of right and wrong as "it is right to predicate 'cat' of this animal and wrong to predicate 'dog' of it" - or, "it is right to eat this round, red thing if you desire an apple, it is wrong to eat it if you desire a pear"), oughts, goals, values, and so on. And that the fence isn't as trivial to break down as one might think.

My understanding is that a utility function is something like a mapping of states of affairs (possible worlds?) onto, say, real numbers. In this context, the question would be giving naturalistic sense to the notion of value - that is, of something's being better or worse for the agent in question - which the numbers here are meant to correlate to. It's the notion of some states of affairs being more or less optimal for the agent - which I think is part of the concept of a utility function - that I want to argue is outside the scope of natural science. Please correct me on utility functions if I've got the wrong end of the stick.
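A minimal sketch of that picture of a utility function (my illustration, with invented states and numbers): a bare mapping from states of affairs to real numbers, with "more optimal for the agent" represented simply as "higher number".

```python
# A utility function as a mapping from states of affairs to real numbers.
utility = {
    "X eats an apple": 1.0,
    "X eats a pear":   0.5,
    "X eats nothing":  0.0,
}

def best_state(states, u):
    """Return the state the agent ranks highest under utility function u."""
    return max(states, key=lambda s: u[s])

print(best_state(list(utility), utility))  # X eats an apple
```

Note that the code captures the mapping perfectly well; what it leaves untouched is the point at issue - why a higher number should count as "better" for the agent, rather than merely as a label we attach.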

To be clear - the intent isn't to attack the idea that there can be interesting and fruitful theories involving utility functions, rationality and related notions. It's just that these aren't consistent with a certain view of science and facthood.

Comment author: Will_Sawin 05 July 2011 10:37:02PM 3 points [-]

I guess the main thing I want to suggest is that there is more than one fence that is hard/impossible to breach.

Furthermore that depending on how you define certain terms, some of the fences may or may not be breachable.

I'm also saying that non-natural "facts" are as easy to work with as natural facts. The issue that they're not part of natural science doesn't impact our ability to discuss them productively.

Comment author: BobTheBob 06 July 2011 04:43:51PM 0 points [-]

I'm also saying that non-natural "facts" are as easy to work with as natural facts. The issue that they're not part of natural science doesn't impact our ability to discuss them productively.

I agree entirely with this. This exercise isn't meant in any way to be an attack on decision theory or the likes. The target is so-called naturalism - the view that all facts are natural facts.

Comment author: Will_Sawin 06 July 2011 11:43:05PM 0 points [-]

I see. That makes sense.

Comment author: ata 05 July 2011 04:33:50AM *  2 points [-]

The idea that one cannot derive an "ought" from an "is" is so often asserted as a settled fact and so rarely actually argued by means other than historical difficulty or personal incredulity. I'd prefer it be stated without the chaotic inversion, if at all — not "one cannot derive an 'ought' from an 'is'", but "I don't know how to derive an 'ought' from an 'is'". In any case, have you read the metaethics sequence? A lot of people seem to disagree, but I found that it mostly resolved/dissolved this problem to my satisfaction (you know, that wonderful feeling when you can look back upon your past self's thoughts on the matter and find them so confused and foreign that you can barely even empathize with the state of mind that generated them anymore).

(Also, I find your claim that wanting is not a naturalistic property baffling, and your argument for that also seems to boil down to personal incredulity (I don't know how to explain wanting in reductive naturalistic terms -> it's impossible).)

Comment deleted 05 July 2011 10:00:03PM [-]
Comment author: BobTheBob 05 July 2011 10:08:21PM 0 points [-]

It may not be great, but I did give an argument. Roughly, again,

a) wants do entail oughts (plausible)

b) wanting = being in unproblematically naturalistic state ABC (from assumption of naturalism)

c) from a and b, there is some true statement of the form 'being in naturalistic state ABC entails an ought'

d) but no claim of the form 'being in naturalistic state ABC entails an ought' is plausible

From the contradiction between c and d I infer the falsity of b. If you could formulate your dissatisfaction as a criticism of a premise or of the reasoning, I'd be happy to listen. In particular, if you can come up with a plausible counter-example to (d), I would like to hear it.

Comment author: Peterdjones 05 July 2011 07:31:03PM *  2 points [-]

To a first approximation, a belief is rational just in case you ought to hold it; an action rational just in case you ought to take it

A belief is rational just in case you rational-ought to hold it; an action is rational just in case you rational-ought to take it.

Rational-ought beliefs and actions are the ones optimal for achieving your goals. Goals and optimality can be explained in scientific language. Rational-ought is not moral-ought. Moral-ought is harder to explain because it is about the goals an agent should have, not the ones they happen to have.

Looked at from a properly scientific point of view, is there any scope for the attribution of purposes or goals or the appraisal of our behaviour in any sense as right or wrong?

These are two different questions. The first is rational-ought, the second is moral-ought.

Try somehow to shoehorn normative facts into a naturalistic world-view, at the possible peril of the coherence of that world-view.

Easily done with non-moral norms such as rationality.

A thing's merely being in a certain physical or computational state does not imbue it with purpose

Not if you think of purpose as a metaphysical fundamental. Easily, if a purpose is just a particular idea in the mind. If I intend to buy a lawnmower, and I write "buy lawnmower" on a piece of paper, there is nothing mysterious about the note, or about the state of mind that preceded it. Of course, all this ease stems from the non-moral nature of what is being considered.

Wants do indeed imply oughts. Since there plausibly is no physical or computational state being in which implies there is anything one ought to do, wanting is not identical to being in a physical or computational state.

That you want to do something does not mean you ought to do it in the categorical, unconditional sense of moral-ought. The difficulty of reducing ethics does not affect non-ethical norms.

ETA

as to the naturalisation of morality...

http://pierrephilosophique.pbworks.com/w/page/41628626/Metaethics-for-Scientists

Comment author: BobTheBob 06 July 2011 01:48:58AM 0 points [-]

Rational-ought beliefs and actions are the ones optimal for achieving your goals. Goals and optimality can be explained in scientific language. Rational-ought is not moral-ought. Moral-ought is harder to explain because it is about the goals an agent should have, not the ones they happen to have.

I'm sincerely not sure about the rational-ought/moral-ought distinction - I haven't thought enough about it. But anyway, I think moral-ought is a red herring here. As far as I can see, the claims made in the post apply to rational-oughts. That was certainly the intention. In other posts on LW about fallacies and the details of rational thinking, it's a commonplace to use quite normative language in connection with rationality. Indeed, a primary goal is to help people to think and behave more rationally, because this is seen as a good for each of us. 'One ought not to procrastinate', 'One ought to compensate for one's biases', etc..

Try somehow to shoehorn normative facts into a naturalistic world-view, at the possible peril of the coherence of that world-view.

Easily done with non-moral norms such as rationality.

Would love to see the details... :-)

Not if you think of purpose as a metaphysical fundamental. Easily, if a purpose is just a particular idea in the mind. If I intend to buy a lawnmower, and I write "buy lawnmower" on a piece of paper, there is nothing mysterious about the note, or about the state of mind that preceded it.

I'm not sure I get this. The intention behind drawing the initial distinction between is/ought problems was to make clear the focus is not on, as it were, the mind of the beholder. The question is a less specific variant of the question as to how any mere physical being comes to have intentions (e.g., to buy a lawnmower) in the first place.

That you want to do something does not mean you ought to do it in the categorical, unconditional sense of moral-ought.

I agree, but I think it does mean you ought to in a qualified sense. Your merely being in a physical or computational state, however, by itself doesn't, or so the thought goes.

Comment author: randallsquared 07 July 2011 01:19:26PM 2 points [-]

I'm sincerely not sure about the rational-ought/moral-ought distinction - I haven't thought enough about it. But anyway, I think moral-ought is a red herring, here. As far as I can see, the claims made in the post apply to rational-oughts.

Rational-oughts are just shortcuts for actions and patterns of actions which are most efficient for reaching your goals. Moral-oughts are about the goals themselves. Determining which actions are most efficient for reaching your goals is entirely naturalistic, and reduces ultimately to statements about what is. Moral-oughts reduce both to what is, and to what satisfies more important goals. The ultimate problem is that there's no way to justify a top-level goal. For a designed mind with a top-level goal, this is not actually a problem, since there's no way to reason one's way to a top-level goal change -- it can be taken as an 'is'. For entities without top-level goals, however, such as humans, this is a serious problem, since it means that there's no ultimate justification for any action at all, only interim justifications, the force of which grows weaker rather than stronger as you climb the reflective tower of goals.

Comment author: Peterdjones 07 July 2011 07:35:38PM 0 points [-]

The ultimate problem is that there's no way to justify a top-level goal. For a designed mind with a top-level goal, this is not actually a problem, since there's no way to reason one's way to a top-level goal change -- it can be taken as an 'is'.

That makes it easier for a designed mind to do rational-ought, but equally harder to do moral-ought.

Comment author: randallsquared 08 July 2011 01:36:21AM 0 points [-]

It might make it easier for a top-level-goal-haver (TLGH) to choose a rational-ought, since there can be no real conflict, but it doesn't necessarily make it easier to reason about, given such a goal. However, I'd say that it makes it much, much easier to do (what the TLGH sees as) moral-ought, since the TLGH presumably has a concrete top level goal, rather than having to figure it out (or have the illusion of trying to figure it out). The TLGH knows what the morally right thing to do is -- it's hardwired. Figuring out morality is harder when you don't already have a moral arrow preset for you.

That isn't to say that we'd agree that a TLGH has the "correct" arrow of morality, but the TLGH can be completely sure that it does, since that's really what it means to have a top level goal. Any wondering about whether a TLGH did the right thing, by itself, will be rational-ought, not moral-ought.

Now, if you meant that it will be harder for it to act like we'd consider a moral entity, then I'd say (again, assuming a top level goal) that it will either do so, or it won't, but it won't be difficult to force itself to do the right thing. This also assumes such a straightforward goal-seeking design is possible for an intelligence. I don't have an opinion on that.

Comment author: Peterdjones 08 July 2011 12:32:30PM *  1 point [-]

The TLGH knows what the morally right thing to do is -- it's hardwired. Figuring out morality is harder when you don't already have a moral arrow preset for you.

Knowledge requires justification. A TLGH that understands epistemology would see itself as not knowing its TLG, since "it was hardwired into me" is no justification. This applies to humans: we are capable of doubting that our evolutionarily derived moral attitudes are the correct ones.

Evolutionary psychology tells us that our evolutionary history has given us certain moral attitudes and behaviour. So far, so good. Some scientifically minded types take this to constitute a theory of objective morality all in itself. However, that would be subject to the Open Question objection: we can ask of our inherited morality whether it is actually right. (Unrelatedly, we are probably not determined to follow it, since we can overcome strong evolutionary imperatives in, for instance, voluntary celibacy.) This is not a merely abstract issue either, since EP has been used to support some contentious claims; for instance, that men should be forgiven for adultery since it is "in their genes" to seek multiple partners.

Any wondering about whether a TLGH did the right thing, by itself, will be rational-ought, not moral-ought.

And if there is any kind of objective truth about which goals are the true top level goals, that is going to have to come from reasoning. Emipricism fails because there are no perceivable moral facts, and ordinary facts fall into the is-ought divide.

Rationality is probably better at removing goals than setting them, better at thou-shalt-nots than thou-shalts. That is in line with the liberal-secular view of morality, where it would be strange and maybe even obnoxious for everyone to be pursuing the same aim.

Comment author: randallsquared 08 July 2011 03:42:44PM 0 points [-]

Knowledge requires justification. A TLGH that understands epistemology would see itself as not knowing its TLG, since "it was hardwired into me" is no justification. This applies to humans: we are capable of doubting that our evolutionarily derived moral attitudes are the correct ones.

This only applies to humans because we are not TLGHs. Beliefs and goals require justification because we might change them. Beliefs and goals which are hardwired do not require justification; they must be taken as given. As far as I'm aware, humans only ever have beliefs or goals that seem hardwired in this sense in the case of damage, like people with Capgras delusion.

However, that would be subject to the Open Question objection: we can ask of our inherited morality whether it is actually right. (Unrelatedly, we are probably not determined to follow it, since we can overcome strong evolutionary imperatives in, for instance, voluntary celibacy).

In fact, I would argue that we can only genuinely ask if our "inherited morality" is right because we are not determined to follow it.

Comment author: Peterdjones 08 July 2011 04:02:09PM 1 point [-]

This only applies to humans because we are not TLGHs. Beliefs and goals require justification because we might change them.

I said knowledge requires justification. I was appealing to the standard Justified True Belief theory of knowledge. That belief per se does not need justification is not relevant.

Comment author: randallsquared 08 July 2011 04:27:36PM 0 points [-]

So,

A TLGH that understands epistemology would see itself as not knowing its TLG, since "it was hardwired into me" is no justification.

So, it's no justification in this technical sense, and it might cheerfully agree that it doesn't "know" its TLG in this sense, but that's completely aside from the 100% certainty with which it holds it, a certainty which can be utterly unshakable by reason or argument.

I misunderstood what you were saying due to "justification" being a technical term, here. :)

Comment author: Peterdjones 07 July 2011 07:23:08PM *  1 point [-]

Would love to see the details... :-)

It's been sketched out several times already, by various people.

1. You have a set of goals (a posteriori "is").
2. You have a set of strategies for achieving goals with varying levels of efficiency (a posteriori "is").
3. Being rational is applying your strategies to achieve your goals optimally (analytical "is"), i.e. if you want to be rational, you ought to optimise your UF.
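The recipe above can be made concrete with a small sketch. All goals, strategies, and numbers here are invented for illustration; the point is only that rational-ought falls out as an ordinary maximization over descriptive facts:

```python
# Goals and strategy efficiencies are plain descriptive facts ("is");
# "you rational-ought to do X" just means X maximizes goal achievement.
# Every name and number below is a made-up illustration.

goals = {"stay_dry": 1.0, "save_money": 0.5}  # goal -> importance (an "is")

# How well each strategy serves each goal (also an "is").
strategies = {
    "take_umbrella": {"stay_dry": 0.9, "save_money": 1.0},
    "take_taxi":     {"stay_dry": 1.0, "save_money": 0.2},
    "walk":          {"stay_dry": 0.1, "save_money": 1.0},
}

def value(strategy):
    """Total goal satisfaction delivered by a strategy."""
    eff = strategies[strategy]
    return sum(goals[g] * eff.get(g, 0.0) for g in goals)

# The "rational-ought" strategy is simply the one with the highest
# value -- a descriptive fact about the numbers above.
best = max(strategies, key=value)
print(best)  # -> take_umbrella
```

Nothing non-naturalistic appears anywhere in the computation; the normative-sounding conclusion is a relabelling of the argmax.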

Of course that isn't pure empiricism (what is?), because 3 is a sort of conceptual analysis of "oughtness". I am not bothered about that for a number of reasons: I am not committed to the insolubility of the is/ought gap, nor to the non-existence of objective ethics.

I'm not sure I get this. The intention behind drawing the initial distinction between is/ought problems was to make clear the focus is not on, as it were, the mind of the beholder. The question is a less specific variant of the question as to how any mere physical being comes to have intentions (e.g., to buy a lawnmower) in the first place.

I don't see why the etiology of intentions should pose any more of a problem than the representation of intentions. You can build robots that seek out light sources. "Seek light sources" is represented in its programming. It came from the programmer. Where's the problem?

I agree, but I think it does mean you ought to in a qualified sense.

But the qualified sense is easily explained as goal+strategy. You rational-ought to adopt strategies to achieve your goals.

Your merely being in a physical or computational state, however, by itself doesn't, or so the thought goes

Concrete facts about my goals and situation, and abstract facts about which strategies achieve which goals, are all that is needed to establish truths about rational-ought. What is unnaturalistic about that? The abstract facts about which strategies achieve which goals may be unnaturalisable in a sense, but it is a rather unimpactive sense. Abstract reasoning in general isn't (at least usefully) reducible to atoms, but that doesn't mean it is "about" some non-physical realm. In a sense it isn't about anything; it just operates on its own level.

Comment author: Jonathan_Lee 05 July 2011 10:09:37AM 1 point [-]

The foundational problem in your thesis is that you have grounded "rationality" as a normative "ought" on beliefs or actions. I dispute that assertion.

Rationality is more reasonably grounded as selecting actions so as to satisfy your explicit or implicit desires. There is no normative force to statements of the form "action X is not rational", unpacked as "If your values fall into {large set of human-like values}, then action X is not optimal, choosing for all similar situations where the algorithm you use is run".

There may or may not be general facts about what it is "rational" for "people" to do; it depends rather crucially on how consistent terminal values are across the set of "people". Neglecting trade with Clippy, it is (probably) not rational for humans to convert Jupiter to paperclips. Clippy might disagree.

It should be clear that rational actions are predicated on terminal values, and do not carry normative connotations. Given terminal values, your means of selecting actions may be rational or otherwise. Again, this is not normative; it may be suboptimal.

Comment author: TrE 05 July 2011 02:13:09PM 0 points [-]

Comment author: Perplexed 05 July 2011 03:42:24PM *  1 point [-]

Consider then a virus particle ... Surely there is nothing in biochemistry, genetics or other science which implies there is anything our very particle ought to do. It's true that we may think of it as having the goal to replicate itself, and consider it to have made a mistake if it replicates itself inaccurately, but these conceptions do not issue from science. Any sense in which it ought to do something, or is wrong or mistaken in acting in a given way, is surely purely metaphorical (no?).

No. The distinction between those viral behaviors that tend to contribute to the virus replicating and those viral behaviors that do not contribute does issue from science. It is not a metaphor to call actions that detract from reproduction "mistakes" on the part of the virus, any more than it is a metaphor to call certain kinds of chemical reactions "exothermic". There is no 'open question' issue here - "mistake", like "exothermic", does not have any prior metaphysical meaning. We are free to define it as we wish, naturalistically.

So much for the practical ought, the version of ought for which ought not is called a mistake because it generates consequences contrary to the agent's interests. What about the moral ought, the version of ought for which ought not is called wrong? Can we also define this kind of ought naturalistically? I think that we can, because once again I deny that "wrong" has any prior metaphysical meaning. The trick is to make the new (by definition) meaning not clash too harshly with the existing metaphysical connotations.

How is this for a first attempt at a naturalistic definition of the moral ought as a subset of the practical ought? An agent morally ought not to do something iff it tends to generate consequences contrary to the agent's interests, those negative consequences arising from the reactions of disapproval coming from other agents.

In general, it is not difficult at all to define either kind of ought naturalistically, so long as one is not already metaphysically committed to the notion that the word 'ought' has a prior metaphysical meaning.

Comment author: [deleted] 06 July 2011 12:35:10AM 4 points [-]

There is no 'open question' issue here - "mistake", like "exothermic", does not have any prior metaphysical meaning. We are free to define it as we wish, naturalistically.

I'm having trouble with the word "metaphysical". In order for me to make sense of the claim that "mistake" and "exothermic" do not have prior metaphysical meanings, I would like to see some examples of words that do have prior metaphysical meanings, so that I can try to figure out from contrasting examples of having and not having prior metaphysical meanings what it means to have a prior metaphysical meaning. Because at the moment I don't know what you're talking about.

Comment author: Perplexed 06 July 2011 12:52:43AM 0 points [-]

Hmmm. I may be using "metaphysical" inappropriately here. I confess that I am currently reading something that uses "metaphysical" as a general term of deprecation, so some of that may have worn off. :)

Let me try to answer your excellent question by analogy to geometry, without abandoning "metaphysical". As is well known, in geometry, many technical terms are given definitions, but it is impossible to define every technical term. Some terms ('point', 'line', and 'on' are examples) are left undefined, though their meanings are supplied implicitly by way of axioms. Undefined terms in mathematics correspond (in this analogy) to words with prior metaphysical meaning in philosophical discourse. You can't define them, because their meaning is somehow "built in".

To give a rather trivial example, when trying to generate a naturalistic definition of ought, we usually assume we have a prior metaphysical meaning for is.

Hope that helped.

Comment author: Peterdjones 05 July 2011 08:17:47PM 2 points [-]

An agent morally ought not to do something iff it tends to generate consequences contrary to the agent's interests, those negative consequences arising from the reactions of disapproval coming from other agents.

That doesn't work. It would mean conformists are always in the right, irrespective of what they are conforming to.

Comment author: Perplexed 05 July 2011 11:06:36PM 0 points [-]

As you may have noticed, that definition was labeled as a "first attempt". It captures some of our intuitions about morality, but not all. In particular, its biggest weakness is that it fails to satisfy moral realists for precisely the reason you point out.

I have a second quill in my quiver. But before using it, I'm going to split the concept of morality into two pieces. One piece is called "de facto morality". I claim that the definition I provided in the grandparent is a proper reductionist definition of de facto morality and captures many of (some) people's intuitions about morality. The second piece is called "ideal morality". This piece is essentially what de facto morality ought to be.

So, your conformist may well be automatically in the right with respect to de facto morality. But it is possible for a moral reformer to point out that he and all of his fellows are in the wrong with respect to ideal morality. That is, the reformer claims that the society would be better off if its de facto conventions were amended from their present unsatisfactory status to become more like the ideal. And, I claim, given the right definition of "society would be better off", this "ideal morality" can be given an objective and naturalistic definition.

For more details, see Binmore - Game Theory and the Social Contract

Comment author: AdeleneDawner 05 July 2011 08:25:38PM 0 points [-]

Not exactly. It means that conformists are never morally wrong, unless some group (probably one that they're not conforming with) punishes them for conforming. They can be morally neutral when conforming, and may be rationally wrong at the same time.

Comment author: torekp 06 July 2011 02:01:27AM 1 point [-]

I think this is right, except possibly for the part about no prior metaphysical meaning. The later explanation of that part didn't clarify it for me. Instead, I'll just indicate what prior meaning I find attached to the idea that "the virus replicated wrongly."

In biology, the idea that organs and behaviors and so on have functions is quite common and useful. The novice medical student can make many correct inferences about the heart by supposing that its function is to pump blood, for example. The idea preceded Darwin, but post-Darwin, we can give a proper naturalistic reduction for it. Roughly speaking, an organ's function is F iff in the ancestral environment, the organ's performance of F is what it was selected for. Various RNA features in a virus might have functions in this sense, and if so, that gives the meaning of saying that in a particular case, the viral reproduction mechanism failed to operate correctly.

That's not a moral norm. It's not even the kind of norm relating to an agent's interests, in my view. But it is a norm.

There was a pre-existing meaning of "biological function" before Darwin came around. So, a Darwinian definition of biological function was not a purely stipulative one. It succeeded only because it captured enough of the tentatively or firmly accepted notions about "biological function" to make reasonably good sense of all that.

Comment author: Perplexed 06 July 2011 05:31:38AM 0 points [-]

... except possibly for the part about no prior metaphysical meaning.

I think I see the source of the difficulty now. My fault. BobTheBob mentioned the mistake of replicating with errors. I took this to be just one example of a possible mistake by a virus, and thought of several more - inserting into the wrong species of host, for example, or perhaps incorporating an instance of the wrong peptide into the viral shell after replicating the viral genome.

I then sought to define 'mistake' to capture the common fitness-lowering feature of all these possible mistakes. However, I did not make clear what I was doing and my readers naturally thought I was still dealing with a replication error as the only kind of mistake.

Sorry to have caused this confusion.

Comment author: BobTheBob 05 July 2011 10:32:38PM 1 point [-]

If I bet higher than 1/6th on a fair die's rolling 6 because in the last ten rolls 6 hasn't come up -meaning it's now 'due'- I make a mistake. I commit an error of reasoning; I do something wrong; I act in a manner I ought not to.
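For what it's worth, the arithmetic behind the "due" fallacy is easy to check numerically. A quick simulation, a minimal sketch assuming nothing beyond independent fair rolls, bears the point out:

```python
import random

random.seed(0)

# Gambler's-fallacy check: estimate P(6 on the next roll) for a fair die,
# conditioning on the previous ten rolls containing no 6 (the "due" case).
# If rolls are independent, the drought should change nothing.
hits = trials = 0
while trials < 50_000:
    rolls = [random.randint(1, 6) for _ in range(11)]
    if 6 not in rolls[:10]:        # keep only histories with a ten-roll drought
        trials += 1
        hits += (rolls[10] == 6)

print(hits / trials)  # stays close to 1/6 (about 0.167), not higher
```

So betting above 1/6 after a drought is, descriptively, a losing policy; the normative question is why that descriptive fact makes the bet a *mistake*.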

What about the virus particle which, in the course of sloshing about in an appropriate medium, participates in the coming into existence of a particle composed of RNA which, as it happens, is mostly identical but differs from itself in a few places. Are you saying that this particle makes a mistake in the same sense of 'mistake' as I do in making my bet?

Option (1): The sense is precisely the same (and it is unproblematically naturalistic). In this case I have to ask what the principles are by which one infers to conclusions about a virus's mistakes from facts about replication. What are the physical laws, how are their consequences (the consequences, again, being claims about what a virus ought to do) measured or verified, and so on?

Option (2): The senses are different. This was the point of calling the RNA mistake metaphorical. It was to convey that the sense is importantly different than it is in the betting case. The idea is that the sense, if any, in which a virus makes a 'mistake' in giving rise to a non-exact replica of itself is not enough to sustain the kind of norms required for rationality. It is not enough to sustain the conclusions about my betting behaviour. Is this fair?

Comment author: Perplexed 06 July 2011 12:18:02AM 2 points [-]

Is this fair?

Not really. You started by making an argument that listed a series of stages (virus, bacterium, nematode, man) and claimed that at no stage along the way (before the last) were any kind of normative concepts applicable. Then, when I suggested the standard evolutionary explanation for the illusion of teleology in nature, you shifted the playing field. In option 1, you demand that I supply standard scientific expositions of the natural history of your chosen biological examples. In option 2 you suggest that you were just kidding in even mentioning viruses, bacteria and nematodes. Unless an organism has the cognitive equipment to make mistakes in probability theory, you simply are not interested in speaking about it normatively.

Do I understand that you are claiming that humans are qualitatively exceptional in the animal kingdom because the word "ought" is uniquely applicable to humans? If so, let me suggest a parallel sequence to the one you suggested starting from viruses. Zygote, blastula, fetus, infant, toddler, teenager, adult. Do you believe it is possible to tell a teenager what she "ought" to do? At what stage in development do normative judgements become applicable?

Here is a cite for sorites. Couldn't resist the pun.

Comment author: BobTheBob 06 July 2011 03:18:25AM 1 point [-]

I appreciate your efforts to spell things out. I have to say I'm getting confused, though.

You started by making an argument that listed a series of stages (virus, bacterium, nematode, man) and claimed that at no stage along the way (before the last) were any kind of normative concepts applicable.

I meant to say that at no stage -including the last!- does the addition of merely naturalistic properties turn a thing into something subject to norms -something of which it is right to say it ought, for its own sake, to do this or that.

I also said that the sense of right and wrong and of purpose which biology provides is merely metaphorical. When you talk about "the illusion of teleology in nature", that's exactly what I was getting at (or so it seems to me). That is, teleology in nature is merely illusory, but the kind of teleology needed to make sense of rationality is not - it's real. Can you live with this? I think a lot of people are apt to think that illusory teleology sort of fades into the real thing with increasing physical complexity. I see the pull of this idea, but I think it's mistaken, and I hope I've at least suggested that adherents of the view have some burden to try to defend it.

Do you believe it is possible to tell a teenager what she "ought" to do?

Now that is a whole other can of worms...

At what stage in development do normative judgements become applicable?

This is a fair and a difficult question. Roughly, another individual becomes suitable for normative appraisal when and to the extent that s/he becomes a recognizably rational agent -ie, capable of thinking and acting for her/himself and contributing to society (again, very roughly). All kinds of interesting moral issues lurk here, but I don't think we have to jump to any conclusions about them.

In case I'm giving the wrong impression, I don't mean to be implying that people are bound by norms in virtue of possessing some special aura or other spookiness. I'm not giving a theory of the nature of norms - that's just too hard. All I'm saying for the moment is that if you stick to purely natural science, you won't find a place for them.

Comment author: timtyler 06 July 2011 12:30:47PM *  1 point [-]

When you talk about "the illusion of teleology in nature", that's exactly what I was getting at (or so it seems to me). That is, teleology in nature is merely illusory, but the kind of teleology needed to make sense of rationality is not - it's real. Can you live with this?

The usual trick is to just call it teleonomy. Teleonomy is teleology with smart pants on.

Comment author: BobTheBob 06 July 2011 04:45:42PM 0 points [-]

Thanks for this - I hadn't encountered this concept. Looks very useful.

Comment author: timtyler 06 July 2011 05:18:40PM *  0 points [-]

Similar is the Dawkins distinction between designed and designoid objects. Personally I was OK with "teleonomy" and "designed". Biologists get pushed into this sort of thing by the literal-minded nit-pickers.

Comment author: Perplexed 06 July 2011 04:54:30AM 0 points [-]

...teleology in nature is merely illusory, but the kind of teleology needed to make sense of rationality is not - it's real. Can you live with this?

No, I cannot. It presumes (or is it argues?) that human rationality is not part of nature.

My apologies for using the phrase "illusion of teleology in nature". It seems to have created confusion. Tabooing that use of the word "teleology", what I really meant was the illusion that living things were fashioned by some rational agent for some purpose of that agent. Tabooing your use of the word, on the other hand, in your phrase "the kind of teleology needed to make sense of rationality" leads elsewhere. I would taboo and translate that use to yield something like "To make sense of rationality in an agent, one needs to accept/assume/stipulate that the agent sometimes acts with a purpose in mind. We need to understand 'purpose', in that sense, to understand rationality."

Now if this is what you mean, then I agree with you. But I think I understand this kind of purpose, identifying it as the cognitive version of something like "being instrumental to survival and reproduction". That is, it is possible for an outside observer to point to behaviors or features of a virus that are instrumental to viral survival and reproduction. At the level of a bacterium, there are second-messenger chemicals that symbolize or represent situations that are instrumental to survival and reproduction. At the level of the nematode, there are neuron firings serving as symbols. At the level of a human the symbols can be vocalizations: "I'm horny; how about you?". I don't see anything transcendently new at any stage in this progression, nor in the developmental progression that I offered as a substitute.

In case I'm giving the wrong impression, I don't mean to be implying that people are bound by norms in virtue of possessing some special aura or other spookiness. I'm not giving a theory of the nature of norms - that's just too hard. All I'm saying for the moment is that if you stick to purely natural science, you won't find a place for them.

Let me try putting that in different words: "Norms are in the eye of the beholder. Natural science tries to be objective - to avoid observer effects. But that is not possible when studying rationality. It requires a different, non-reductionist and observer dependent way of looking at the subject matter." If that is what you are saying, I may come close to agreeing with you. But somehow, I don't think that is what you are saying.

Comment author: BobTheBob 07 July 2011 03:56:17AM 0 points [-]

I would taboo and translate that use to yield something like "To make sense of rationality in an agent, one needs to accept/assume/stipulate that the agent sometimes acts with a purpose in mind. We need to understand 'purpose', in that sense, to understand rationality."

Thanks, yes. This is very clear. I can buy this.

But I think I understand this kind of purpose, identifying it as the cognitive version of something like "being instrumental to survival and reproduction". That is, it is possible for an outside observer to point to behaviors or features of a virus that are instrumental to viral survival and reproduction.

Sorry if I'm slow to be getting it, but my understanding of your view is that the sort of purpose that a bacterium has, on the one hand, and the purpose required to be a candidate for rationality, on the other, are, so to speak, different in degree but not in kind. They're the same thing, just orders of magnitude more sophisticated in the latter case (involving cognitive systems). This is the idea I want to oppose. I have tried to suggest that bacterial purposes are 'merely' teleonomic -to borrow the useful term suggested by timtyler- but that human purposes must be of a different order.

Here's one more crack at trying to motivate this, using very evidently non-scientific terms. On the one hand, I submit that you cannot make sense of a thing (human, animal, AI, whatever) as rational unless there is something that it cares about. Unless, that is, there is something which matters or is important to it (this something can be as simple as survival or reproduction). You may not like to see a respectable concept like rationality consorting with such waffly notions, but there you have it. Please object to this if you think it's false.

On the other hand, nothing in nature implies that anything matters (etc) to a thing. You can show me all of the behavioural/cognitive correlates of X's mattering to a thing, or of a thing's caring about X, and provide me detailed evolutionary explanations of the behavioural correlates' presence, but these correlates simply do not add up to the thing's actually caring about X. X's being important to a thing, X's mattering, is more than a question of mere behaviour or computation. Again, if this seems false, please say.

If both hands seem false, I'd be interested to hear that, too.

At the level of a bacterium, there are second-messenger chemicals that symbolize or represent situations that are instrumental to survival and reproduction. At the level of the nematode, there are neuron firings serving as symbols. At the level of a human the symbols can be vocalizations: "I'm horny; how about you?". I don't see anything transcendently new at any stage in this progression, nor in the developmental progression that I offered as a substitute.

As soon as we start to talk about symbols and representation, I'm concerned that a whole new set of very thorny issues get introduced. I will shy away from these.

Let me try putting that in different words: "Norms are in the eye of the beholder. Natural science tries to be objective - to avoid observer effects. But that is not possible when studying rationality. It requires a different, non-reductionist and observer dependent way of looking at the subject matter." If that is what you are saying, I may come close to agreeing with you. But somehow, I don't think that is what you are saying.

"It requires a different, non-reductionist ... way of looking at the subject matter." -I can agree with you completely on this. (I do, however, want to resist the subjective, "observer dependent" part.)

Comment author: Peterdjones 07 July 2011 08:50:32PM 1 point

On the other hand, nothing in nature implies that anything matters (etc) to a thing. You can show me all of the behavioural/cognitive correlates of X's mattering to a thing, or of a thing's caring about X, and provide me detailed evolutionary explanations of the behavioural correlates' presence, but these correlates simply do not add up to the thing's actually caring about X. X's being important to a thing, X's mattering, is more than a question of mere behaviour or computation

What is missing? A quale?

Comment author: timtyler 07 July 2011 08:14:55AM *  1 point

Sorry if I'm slow to be getting it, but my understanding of your view is that the sort of purpose that a bacterium has, on the one hand, and the purpose required to be a candidate for rationality, on the other, are, so to speak, different in degree but not in kind. They're the same thing, just orders of magnitude more sophisticated in the latter case (involving cognitive systems). This is the idea I want to oppose. I have tried to suggest that bacterial purposes are 'merely' teleonomic -to borrow the useful term suggested by timtyler- but that human purposes must be of a different order.

Humans have brains, and can better represent future goal states. However, "purpose" in nature ultimately comes from an optimisation algorithm. That is usually differential reproductive success. Human brains run their own optimisation algorithm - but that algorithm was built by reproducers and reflects their goals. I would be reluctant to dis bacterial purposes. They are trying to steer the future too - it is just that they are not so good at it.

Comment author: BobTheBob 07 July 2011 05:54:05PM *  0 points

You use a fair bit of normative, teleological vocabulary here: 'purpose', 'goal', 'success', 'optimisation', 'trying', being 'good' at 'steering' the future. I understand your point is that these terms can all be cashed out in unproblematic, teleonomic terms, and that this is more or less the end of the matter. Nothing dubious going on here. Is it fair to say, though, that this does not really engage my point, which is that such teleonomic substitutes are insufficient to make sense of rationality?

To make sense of rationality, we need claims such as,

  • One ought to rank probabilities of events in accordance with the dictates of probability theory (or some more elegant statement to that effect).

If you translate this statement, substituting for 'ought' the details of the teleonomic 'ersatz' correlate, you get a very complicated statement about what one likely will do in different circumstances, and possibly about one's ancestors' behaviours and their relation to those ancestors' survival chances (all with no norms).

This latter complicated statement will not mean what the first statement means, and won't do the job required in discussing rationality of the first statement. The latter statement will be an elaborate description; what's needed is a prescription.
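To fix ideas: the 'dictates of probability theory' invoked above are the coherence axioms, and a purely descriptive treatment of them is easy to exhibit. The sketch below (event names, credence values, and the helper function are all hypothetical, chosen for illustration) checks whether a set of credences satisfies the axioms. Note that the code can report a violation, but the prescription 'so the agent should revise its credences' appears nowhere in the computation.

```python
# A purely descriptive check of probabilistic coherence. The function
# reports whether a set of credences violates the probability axioms;
# it contains no 'ought'. All names and numbers here are hypothetical.

def coherence_violations(credences, partitions):
    """Return a list of ways `credences` violate the probability axioms.

    `credences` maps event names to numbers; each tuple in `partitions`
    lists event names assumed mutually exclusive and jointly exhaustive.
    """
    violations = []
    # Axiom: every probability lies in [0, 1].
    for event, p in credences.items():
        if not 0.0 <= p <= 1.0:
            violations.append(f"P({event}) = {p} lies outside [0, 1]")
    # Axiom: probabilities over a partition sum to 1.
    for partition in partitions:
        total = sum(credences[e] for e in partition)
        if abs(total - 1.0) > 1e-9:
            violations.append(f"P({' + '.join(partition)}) = {total}, not 1")
    return violations

# Hypothetical agent: credence 0.7 in rain and 0.5 in no-rain.
agent = {"rain": 0.7, "no_rain": 0.5}
print(coherence_violations(agent, [("rain", "no_rain")]))
```

The check describes the agent's credences and their relation to the axioms; whether the agent is thereby *obliged* to change them is exactly the further, normative question at issue.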

Probably none of this should matter to someone doing biology, or for that matter decision theory. But if you want to go beyond and commit to a doctrine like naturalism or physical reductionism, then I submit this does become relevant.

Comment author: Peterdjones 07 July 2011 08:40:23PM 1 point

This latter complicated statement will not mean what the first statement means, and won't do the job required in discussing rationality of the first statement. The latter statement will be an elaborate description; what's needed is a prescription.

Do you accept that a description of what an ideal agent does is equivalent to a prescription of what a non-ideal agent (of the same goals) should do?

Comment author: BobTheBob 08 July 2011 04:31:38AM 0 points

This is a nice way of putting things. As long as we're clear that what makes it a prescription is the fact that it is an ideal for the non-ideal agent.

Do you think this helps the cause of naturalism?

Comment author: Perplexed 07 July 2011 05:19:42AM 0 points

I have tried to suggest that bacterial purposes are 'merely' teleonomic -to borrow the useful term suggested by timtyler- but that human purposes must be of a different order. ...

As soon as we start to talk about symbols and representation, I'm concerned that a whole new set of very thorny issues get introduced. I will shy away from these.

My position is that, to the extent that the notion of purpose is at all spooky, that spookiness was already present in a virus. The profound part of teleology is already there in teleonomy.

Which is not to say that humans are different from viruses only in degree. They are different in quality with regard to some other issues involved in rationality. Cognitive issues. Symbol processing issues. Issues of intentionality. But not issues of pure purpose and telos. So why don't you and I just shy away from this conversation. We've both stated our positions with sufficient clarity, I think.

Comment author: timtyler 06 July 2011 08:35:31AM *  0 points

Can we also define this kind of ought naturalistically? I think that we can, because once again I deny that "wrong" has any prior metaphysical meaning. The trick is to make the new (by definition) meaning not clash too harshly with the existing metaphysical connotations

The main trick seems to be getting people to agree on a definition. For instance, this:

How is this for a first attempt at a naturalistic definition of the moral ought as a subset of the practical ought? An agent morally ought not to do something iff it tends to generate consequences contrary to the agent's interests, those negative consequences arising from the reactions of disapproval coming from other agents.

...aims rather low. That just tells people to do what they would do anyway. Part of the social function of morality is to give people an ideal to personally aim towards. Another part of the social function of morality is to provide people with an ideal form of behaviour, in order to manipulate others into behaving "better". Another part of the social function of morality is to allow people to signal their goodness by broadcasting their moral code. Done right, that makes them seem more trustworthy and predictable. Your proposal does not score very well on these fronts.

Comment author: Douglas_Reay 19 February 2012 03:01:07PM 0 points

| The foregoing thoughts are hardly original. David Hume is famous for having observed that ought cannot be derived from is

See also the article From 'Is' to 'Ought', by Douglas_Reay