On the face of it, there is a tension in adhering both to the idea that there are facts about what it's rational for people to do and to the idea that natural or scientific facts are all the facts there are. The aim of this post is just to try to make clear why this should be so, and hopefully to get feedback on what people think of the tension.

In short

To a first approximation, a belief is rational just in case you ought to hold it; an action rational just in case you ought to take it. A person is rational to the extent that she believes and does what she ought to. Being rational, it is fair to say, is a normative or prescriptive property, as opposed to a merely descriptive one. Natural science, on the other hand, is concerned merely with descriptive properties of things -what they weigh, how they are composed, how they move, and so on. On the face of it, being rational is not the sort of property about which we can theorize scientifically (that is, in the vocabulary of the natural sciences). To put the point another way, rationality concerns what a thing (agent) ought to do, natural science concerns only what it is and will do, and one cannot deduce 'ought' from 'is'.

At greater length

There are at least two is/ought problems, or maybe two ways of thinking about the is/ought problem. The first problem (or way of thinking about the one problem) is posed from a subjective point of view. I am aware that things are a certain way, and that I am disposed to take some course of action, but neither of these things implies that I ought to take any course of action -neither, that is, implies that taking a given course of action would in any sense be right. How do I justify the thought that any given action is the one I ought to take? Or, taking the thought one step further, how, attending only to my own thoughts, do I differentiate merely being inclined to do something from being bound by some kind of rule or principle or norm, to do something?

This is an interesting question -one which gets to the very core of the concept of being justified, and hence of being rational (rational beliefs being justified beliefs). But it isn't the problem of interest here.

The second problem, the problem of interest, is evident from a purely objective, scientific point of view. Consider a lowly rock. By empirical investigation, we can learn its mass, its density, its mineralogical composition, and any number of other properties. Now, left to their own devices, rocks don't do much of anything, comparatively speaking, so it isn't surprising that we don't expect there to be anything it ought to do. In any case, natural science does not imply there is anything it ought to do, I think most will agree.

Consider then a virus particle - a complex of RNA and ancillary molecules. Natural science can tell us how it will behave in various circumstances -whether and how it will replicate itself, and so on- but once again surely there is nothing in biochemistry, genetics or other science which implies there is anything our very particle ought to do. It's true that we may think of it as having the goal to replicate itself, and consider it to have made a mistake if it replicates itself inaccurately, but these conceptions do not issue from science. Any sense in which it ought to do something, or is wrong or mistaken in acting in a given way, is surely purely metaphorical (no?).

How about a bacterium? It's orders of magnitude more complicated, but I don't see that matters are any different as regards what it ought to do. Science has nothing to tell us about what if anything is important to a bacterium, as distinct from what it will tend to do.

Moving up the evolutionary ladder, does the introduction of nervous systems make any difference? What do we think about, say, nematodes or even horseshoe crabs? The feedback mechanisms underlying the self-regulatory processes in such animals may be leaps and bounds more sophisticated than in their non-neural forebears, but it's far from clear how such increasing complexity could introduce goals.

To cut to the chase, how can matters be any different with the members of Homo sapiens? Looked at from a properly scientific point of view, is there any scope for the attribution of purposes or goals or the appraisal of our behaviour in any sense as right or wrong? I submit that a mere increase in complexity -even if by many orders of magnitude- does not turn the trick. To be clear, I'm not claiming there are no such facts -far from it- just that these facts cannot be articulated in the language of purely natural science.

The upshot

The foregoing thoughts are hardly original. David Hume is famous for having observed that ought cannot be derived from is:

In every system of morality, which I have hitherto met with, I have always remark'd, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surpriz'd to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it shou'd be observ'd and explain'd; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention wou'd subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relation of objects, nor is perceiv'd by reason. -David Hume, A Treatise of Human Nature (London: Penguin, 1984) p. 521.

(The issue, together with this quote, is touched on from a different point of view in this post of lukeprog's.) I think these considerations need facing up to. I see three options:

Option (1): Accept the basic point, stick resolutely to naturalism, and deny that there are any facts as to what it is rational for any given member of Homo sapiens to do. In other words, become an eliminativist about all normative concepts. Paul Churchland, I understand, is an exponent of this position (or of something more nuanced along these lines).

Option (2): Reject the argument above on one or another ground. Try somehow to shoehorn normative facts into a naturalistic world-view, at the possible peril of the coherence of that world-view. I acknowledge that this can be a valiant undertaking for those whose commitments suggest it. Any who embark on it should be aware that there is at least half a century's worth of beleaguered efforts to put this Humpty together -it is not an easy task. One might want to start with the likes of Ruth Garrett Millikan or Jerry Fodor, then their various critics.

Option (3): Accept the argument above and reconcile yourself to the existence of mutually incommensurable but indispensable understandings of yourself. Be happy.

One response

This is already ponderously long, but there is one response worth anticipating, namely, that oughts can be inferred from wants (or preferences or utility functions), and that wants (preferences, utility functions) are naturalistic. The problem here is with the second conjunct - wants (etc.) are not naturalistic. At least, not obviously so (and, incidentally, the same fate befalls beliefs). My explanation is as follows.

The thinking behind this proposal would presumably be something like,

P) X's wanting that X eat an apple entails, other things being equal, that X ought to eat an apple.

The force of naturalism or physicalism in this context is presumably a commitment to some empirically testable analysis of wants, comparable to "Water is H2O", e.g.

  • to want that one eat an apple = to be in brain state ABC

or

  • to want that one eat an apple = to be composed in part of some structure (brain or other) which implements a Turing machine which is in computational state DEF

or ...

Now, if both of these thoughts (the thought about wants entailing oughts and the thought about there being an empirically testable analysis) are correct, then it should be possible to substitute the analysis into (P):

P') That X's brain is in state ABC entails, other things being equal, that X ought to eat an apple.

or

P'') That X is composed in part of some structure which implements a Turing machine which is in computational state DEF entails, other things being equal, that X ought to eat an apple.

or... (your favourite theory here)

I submit that neither P' nor P'' is at all plausible, for the reasons reviewed above. A thing's merely being in a certain physical or computational state does not imbue it with purpose. Such facts do not entail that there is anything which matters to it, or which makes anything right or wrong for the thing. Concerning P'', note that although computers are sometimes thought of as having purposes in virtue of our having designed them (the computer 'makes a mistake' when it calculates an incorrect value), there is not normally thought to be any sense in which they have intrinsic purposes independent of ours, as the view under scrutiny would require of them.

There are all kinds of possible refinements of P' and P'' (P' and P'' are commonly viewed as non-starters anyway, owing to the plausibility of so-called semantic externalism which they ignore). My question is whether any more refined option shows a way to defeat the objection being raised here.

Wants do indeed imply oughts. But since there is plausibly no physical or computational state such that being in it implies there is anything one ought to do, wanting is not identical to being in any physical or computational state.
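Schematically, this closing argument is a modus tollens; a minimal formalization (the predicate letters are mine, not the post's):

```latex
% W(x): x wants that P;  N(x): x is in naturalistic state ABC;
% O(x): there is something x ought to do.
\begin{align*}
&(1)\quad \forall x\,\big(W(x) \rightarrow O(x)\big)
    && \text{wants imply oughts} \\
&(2)\quad \neg\,\forall x\,\big(N(x) \rightarrow O(x)\big)
    && \text{no naturalistic state implies an ought} \\
&(3)\quad \forall x\,\big(W(x) \leftrightarrow N(x)\big)
    && \text{the naturalistic identity under test} \\
&\text{(1) and (3) jointly entail } \forall x\,(N(x) \rightarrow O(x)) \text{, contradicting (2);} \\
&\therefore\ \neg(3).
\end{align*}
```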

Comments
50ata

The idea that one cannot derive an "ought" from an "is" is so often asserted as a settled fact and so rarely actually argued by means other than historical difficulty or personal incredulity. I'd prefer it be stated without the chaotic inversion, if at all — not "one cannot derive an 'ought' from an 'is'", but "I don't know how to derive an 'ought' from an 'is'". In any case, have you read the metaethics sequence? A lot of people seem to disagree, but I found that it mostly resolved/dissolved this problem to my satis... (read more)

0BobTheBob
It may not be great, but I did give an argument. Roughly, again:
a) wants do entail oughts (plausible)
b) wanting = being in unproblematically naturalistic state ABC (from assumption of naturalism)
c) from a and b, there is some true statement of the form 'being in naturalistic state ABC entails an ought'
d) but no claim of the form 'being in naturalistic state ABC entails an ought' is plausible
From the contradiction between c and d, I infer the falsity of b. If you could formulate your dissatisfaction as a criticism of a premise or of the reasoning, I'd be happy to listen. In particular, if you can come up with a plausible counter-example to (d), I would like to hear it.

The problem here is with the second conjunct - wants (etc.) are not naturalistic.

Suppose I hear Bob say "I want to eat an apple." Am I justified in assigning a higher probability to "Bob wants to eat an apple" after I hear this than before (assuming I don't have some other evidence to the contrary, like someone is holding a gun to Bob's head)? Assuming the answer is yes, and given that "Bob said 'I want to eat an apple.'" is a naturalistic fact, how did I learn something non-naturalistic from that?
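The update described in this comment is ordinary Bayesian conditioning; for concreteness, a minimal rendering in standard notation (the letters W and S are my shorthands, not Wei Dai's):

```latex
% W: "Bob wants to eat an apple"    S: "Bob said 'I want to eat an apple'"
P(W \mid S) \;=\; \frac{P(S \mid W)\,P(W)}{P(S)} \;>\; P(W)
\quad\text{iff}\quad P(S \mid W) > P(S)
```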

0BobTheBob
I think this question hits the nail on the head. You are justified in assigning a higher probability to "Bob wants to eat an apple" just in case you are already justified in taking Bob to be a rational agent (other things being equal...). If Bob isn't at least minimally rational, you can't even get so far as construing his words as English, let alone to trust that his intent in uttering them is to convey that he wants to eat an apple (think about assessing a wannabe AI chatbot, here). But in taking Bob to be rational, you are already taking him to have preferences and beliefs, and for there to be things which he ought or ought not to do. In other words, you have already crossed beyond what mere natural science provides for. This, anyway, is what I'm trying to argue.
5Wei Dai
I think I kind of see what you're getting at. In order to recognize that Bob is rational, I have to have some way of knowing the properties of rationality, and the way we learn such properties does not seem to resemble the methods of the empirical sciences, like physics or chemistry. But to me it does seem to bear some resemblance to the methods of mathematics. For example in number theory we try to capture some of our intuitions about "natural numbers" in a set of axioms, which then allows us to derive other properties of natural numbers. In the study of rationality we have for example Von Neumann–Morgenstern axioms. Although there is much more disagreement about what an appropriate set of axioms might be where rationality is concerned, the basic methodology still seems similar. Do you agree?
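For reference, the Von Neumann–Morgenstern axioms mentioned here, in their standard textbook form for a preference relation over lotteries (nothing below is specific to this thread):

```latex
% The four VNM axioms on a preference relation \succeq over lotteries L, M, N:
\begin{align*}
&\text{Completeness:}\quad L \succeq M \ \text{ or } \ M \succeq L \\
&\text{Transitivity:}\quad L \succeq M \ \text{ and } \ M \succeq N \implies L \succeq N \\
&\text{Continuity:}\quad L \succeq M \succeq N \implies \exists\, p \in [0,1]:\ pL + (1-p)N \sim M \\
&\text{Independence:}\quad L \succeq M \implies pL + (1-p)N \succeq pM + (1-p)N \quad \forall\, p \in (0,1]
\end{align*}
% The VNM theorem: \succeq satisfies these axioms iff it is represented by
% expected utility, i.e. L \succeq M \iff \mathbb{E}_L[u] \geq \mathbb{E}_M[u].
```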
0BobTheBob
I appreciate this clarification. The point is meant to be about, as you say, the empirical sciences. I agree that there can be arbitrarily sophisticated scientific theories concerning rational behaviour - just that these theories aren't straight-forwardly continuous with the theories of natural science. In case you haven't encountered it and might be interested, the underdetermination problem associated with inferring to beliefs and desires from mere behaviour has been considered in some depth by Donald Davidson, eg in his Essays on Actions and Events
1Wei Dai
In your OP, you wrote that you found statements like P' and P'' implausible. It seems quite possible to me that, perhaps through some method other than the methods of the empirical sciences (for example, through philosophical inquiry), we can determine that among the properties of "want" is that "want to eat an apple" correctly describes the brain state ABC or computational state DEF (or something of that nature). Do you still consider that statement implausible?
0BobTheBob
This seems reasonable, but I have to ask about "correctly describes". The statement

* "want to eat an apple" implies being in brain state ABC or computational state DEF (or something of that nature)

is plausible to me. I think the reverse implication, though, raises a problem:

* being in brain state ABC or computational state DEF (or something of that nature) implies "want to eat an apple"

But maybe neither of these is what you have in mind?
0Wei Dai
I think I mean the latter. What problem do you see with it?
0BobTheBob
I do accept that 'wants' imply 'oughts'. It's an oversimplification, but the thought is that statements such as

* X's wanting that X eat an apple implies (many other things being equal) that X ought to eat an apple.

are intuitively plausible. If wanting carries no implications for what one ought to do, I don't see how motivation can get off the ground. Now, if we have

1) wanting that P implies one ought to do Q, and
2) being in physical state ABC implies wanting that P

then, by transitivity of implication, we get

3) being in physical state ABC implies one ought to do Q

And this is just the kind of implication I'm trying to show is problematic.
0Wei Dai
Would it be fair to say that your position is that there could be two physically identical brains, and one of them wants to eat an apple but the other doesn't, or perhaps that one of them is rational but the other isn't? In other words, that preference-zombies or rationality-zombies could exist? (In case it's not clear why I'm saying this, this is what accepting (1) while denying (3) would imply.)
0BobTheBob
I think your question again gets right to the nub of the matter. I have no snappy answer to the challenge -here is my long-winded response.

The zombie analogy is a good one. I understand it's meant just as an analogy -the intent is not to fall into the qualia quagmire. The thought is that from a purely naturalistic perspective, people can only properly be seen as, as you put it, preference- or rationality-zombies. The issue here is the validity of identity claims of the form,

* Wanting that P = being in brain state ABC

My answer is to compare them to the fate of identity claims relating to sensations (qualia again), such as

* Having sensation S (eg, being in pain) = being in brain state DEF

Suppose being in pain is found empirically always to correlate with being in brain state DEF, and the identity is proposed. Qualiaphiles will object, saying that this identity misses what's crucial to pain, viz, how it feels. The qualiaphile's thought can be defended by considering the logic of identity claims generally (this adapted from Saul Kripke's Naming and Necessity). Scientific identity claims are necessary - if water = H2O in this world, then water = H2O in all possible worlds. That is, because water is a natural kind, whatever it is, it couldn't have been anything else. It is possible for water to present itself to us in a different phenomenal aspect ('ice9'!), but this is OK because what's essential to water is its underlying structure, not its phenomenal properties. The situation is different for pain - what's essential to pain is its phenomenal properties. Because pain essentially feels like this (so the story goes), its correlation with being in brain state DEF can only be contingent. Since identities of this kind, if true, are by their natures necessary, the identity is false. There is a further step (lots of steps, I admit) to rationality. The thought is that our access to people's rationality is 'direct' in the way our access to pain is. The unmediated j
0Peterdjones
It is still not clear whether you think rationality is analogous to qualia or is a quale.
0BobTheBob
I think the formal similarities of some aspects of arguments about qualia on the one hand and rationality on the other are the extent of the similarities. I haven't followed all the recent discussions on qualia, so I'm not sure where you stand, but personally, I cannot make sense of the concept of qualia. Rationality-involving concepts (among them beliefs and desires), though, are absolutely indispensable. So I don't think the rationality issue resolves into one about qualia.

I appreciated your first July 07 comment about the details as to how norms can be naturalized and started to respond, then noticed the sound of a broken record. Going round one more time, to me it boils down to what Hume took to be obvious:

* What you ought to do is distinct from what you will do.
* Natural science can tell you at best what you will do.
* Natural science can't tell you what you ought to do.

It is surprising to me there is so much resistance (I mean, from many people, not just yourself) to this train of thought. When you say in that earlier comment 'You have a set of goals...', you have already, in my view, crossed out of natural science. What natural science sees is just what it is your propensity to do, and that is not the same thing as a goal.
-1Peterdjones
Rationality uncontroversially involves rules and goals, both of which are naturalisable. You have said there is an extra ingredient of "caring", which sounds qualia-like.

Not in all cases, surely? What would an is/ought gap be when behaviour matched the ideal?

That depends on what you mean by 'can'. All the information about the intentions and consequences of your actions is encoded in a total physical picture of the universe. Where else would it be? OTOH, natural science, in practice, cannot produce that answer.

Natural science is not limited to behaviour: it can peek inside a black box and see that a certain goal is encoded into it, even if it is not being achieved.
-1Peterdjones
I don't see the problem with the latter either.
0torekp
There are underdetermination problems all over the philosophy of science. I don't see how this poses a special problem for norms, or rationality. When two domains of science are integrated, it is often via proposed bridge laws that may not provide an exactly intuitive match. For example, some of the gases that have a high "temperature" when that is defined as mean kinetic energy, might feel somewhat colder than some others with lower "temperature". But we accept the reduction provided it succeeds well enough. If there are no perfect conceptual matches by definitions of a to-be-reduced term in the terms of the reducing domain, that is not fatal. If we can't find one now, that is even less so.
0BobTheBob
I agree that underdetermination problems are distinct from problems about norms -from the is/ought problem. Apologies if I introduced confusion in mentioning them. They are relevant because they arise (roughly speaking) at the interface between decision theory and empirical science, ie, where you try to map mere behaviours onto desires and beliefs. My understanding is that in philosophy of science, an underdetermination problem arises when all evidence is consistent with more than one theory or explanation. You have a scientist, a set of facts, and more than one theory which the scientist can fit to the facts. In answer to your initial challenge, the problem is different for human psychology because the underdetermination is not of the scientist's theory but supposedly of one set of facts (facts about beliefs and desires) by another (behaviour and all cognitive states of the agent). That is, in contrast to the basic case, here you have a scientist, one set of facts -about a person's behaviour and cognitive states- a second set of supposed facts -about the person's beliefs and desires- and the problem is that the former set underdetermines the latter.
0torekp
You seem to be introducing a fact/theory dichotomy. That doesn't seem promising. If we look at successful reductions in the sciences, they can make at least some of our underdetermination problems disappear. Mean kinetic energy of a gas is a more precise notion than "temperature" was in prior centuries, for example. I wouldn't try to wrestle with the concepts of cognitive states and behaviors to resolve underdetermination. Instead, it seems worthwhile to propose candidate bridge laws and see where they get us. I think that Millikan et al may well be onto something.
0BobTheBob
As I understand it, the problem of scientific underdetermination can only be formulated if we make some kind of fact/theory distinction - observation/theory would be better, is that ok with you? I'm not actually seeing how the temperature example is an instance of underdetermination, and I'm a little fuzzy on where bridge laws fit in, but would be open to clarification on these things.
0torekp
Well, scientific underdetermination problems can be formulated with a context-relative observation/theory distinction. But this is compatible with seeing observations as potentially open to contention between different theories (and in that sense "theory-laden"). The question is, are these distinctions robust enough to support your argument? By the way, I'm confused by your use of "cognitive states" in your 08 July comment above, where it is contrasted to beliefs and desires. Did you mean neural states? Temperature was underdetermined in the early stages of science because the nebulae of associated phenomena had not been sorted out. Sometimes different methods of assessing temperature could conflict. E.g., object A might be hotter to the touch than B, yet when both are placed in contact with C, B warms C and A cools it.
0BobTheBob
You are quite right -sorry about the confusion. I meant to say behaviour and computational states -the thought being that we are trying to correlate having a belief or desire to being in some combination of these. I understand you're referring here to the claim -for which I can't take credit- that facts about behaviour underdetermine facts about beliefs and desires. Because the issue -or so I want to argue- is of underdetermination of one set of potential facts (namely, about beliefs and desires) by another (uninterpreted behaviour), rather than of underdetermination of scientific theory by fact or observation, I'm not seeing that the issue of the theory-ladenness of observation ultimately presents a problem.

The underdetermination is pretty easy to show, at least on a superficial level. Suppose you observe

* a person, X, pluck an apple from a tree and eat it (facts about behaviour).

You infer:

* X desires that s/he eat an apple, and X believes that if s/he plucks and bites this fruit, s/he will eat an apple.

But couldn't one also infer,

* X desires that s/he eat a pear, and X believes (mistakenly) that if s/he plucks and bites this fruit, s/he will eat a pear.

or

* X desires that s/he be healthy, and X believes that if s/he plucks and bites this fruit (whatever the heck it is), s/he will be healthy.

You may think that if you observe enough behaviour, you can constrain these possibilities. There are arguments (which I acknowledge I have not given), which show (or so a lot of people think) that this is not the case - the underdetermination keeps pace.
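The point can be made concrete with a toy expected-utility model. A minimal sketch, with illustrative names and numbers of my own (not anyone's actual theory of interpretation): two incompatible belief/desire assignments that rationalize exactly the same observed behaviour.

```python
# Toy illustration of underdetermination: distinct (belief, desire)
# assignments that both rationalize "pluck and bite the fruit".

actions = ["pluck_and_bite", "walk_away"]

def best_action(beliefs, utilities):
    """Pick the action maximizing expected utility.

    beliefs:   {action: {outcome: probability}}
    utilities: {outcome: desirability}
    """
    def eu(action):
        return sum(p * utilities[o] for o, p in beliefs[action].items())
    return max(actions, key=eu)

# Assignment 1: X wants an apple and believes the fruit is an apple.
beliefs_1   = {"pluck_and_bite": {"eat_apple": 1.0},
               "walk_away":      {"eat_nothing": 1.0}}
utilities_1 = {"eat_apple": 1.0, "eat_nothing": 0.0}

# Assignment 2: X wants a pear and (mistakenly) believes the fruit is a pear.
beliefs_2   = {"pluck_and_bite": {"eat_pear": 1.0},
               "walk_away":      {"eat_nothing": 1.0}}
utilities_2 = {"eat_pear": 1.0, "eat_nothing": 0.0}

# Both assignments predict the same observable action:
assert best_action(beliefs_1, utilities_1) == "pluck_and_bite"
assert best_action(beliefs_2, utilities_2) == "pluck_and_bite"
```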
0torekp
Emphasis added. The issue you're pointing to still just looks like a particular case of underdetermination of the more-theoretical by the more-observational (and the associated "problem" of theory-laden observations). Nothing new under the sun here. Just the same old themes, with minor variations, that apply all over science. Thus, no reason to single out psychology for exclusion from naturalistic study. One observer looks at Xanadu and sees that she wanted an apple, and that she was satisfied. Another looks at her and sees only that she plucked an apple, and infers that she wanted it. Another looks and sees a brown patch here, and a red patch there, and infers that these belonged to a human and an apple respectively... Compare: one scientist looks at the bubble chamber and sees two electrons fly through it. Another sees two bubble tracks ... etc.
0BobTheBob
As I tried to explain in my July 08 post, there is a difference. Straight-forward scientific underdetermination:

* One observer/scientist
* One, unproblematic set of facts (a curved white streak on a film plate exposed in a bubble chamber)
* Any number of mutually incompatible scientific theories, each of which adequately explains this and all other facts. All theories completely adequate to all observations. The only puzzle is that there can be more than one theory. (Tempting to imagine two of these might be, say, established particle theory, and Wolfram's New Kind of Science conception of nature. Though presumably they would ultimately make divergent predictions, meaning this is a misleading thought.)

Underdetermination of psychological facts by naturalistic facts:

* One observer/scientist
* One, unproblematic set of facts (behaviour and brain states, eg, a person picking an apple, and all associated neurological events)
* Any number of problematic sets of supposed facts (complete but mutually incompatible assignments of beliefs and desires to the person consistent with her behaviour and brain states)
* No (naturalistic) theory which justifies choosing one of the latter sets of facts -that is, justifies an assignment of beliefs and desires to the person.

The latter problem is not just an instance of the former. The problem for physics comparable to psychological underdetermination might look like this (ignoring Reality for the moment to make the point):

* Scientist observes trace on film plate from cloud chamber experiment.
* Scientist's theory is consistent with two different possible explanations (in one explanation it's an electron, in another it's a muon).
* No further facts can nail down which explanation is correct, and all facts can anyway be explained, if more pedantically, without appeal to either electrons or muons. That is, both explanations can be reconciled equally well with all possible facts, and neither explanation anyway is u
0torekp
The differences you've identified amount to (A) both explanations can be reconciled equally well with all possible facts, and (B) all facts can anyway be explained without the theoretical posits. But (B) doesn't seem in-principle different from any other scientific theoretical apparatus. Simply operationalize it thoroughly and say "shut up and calculate!" So that leaves (A). I'll admit that this makes a big difference, but it also seems a very tall order. The idea that any given hypothesized set of beliefs and desires is compatible with all possible facts, is not very plausible on its face. Please provide links to the aforementioned arguments to that effect, in the literature.
0BobTheBob
I didn't mean to say this, if I did. The thesis is that there are indefinitely many sets of beliefs and desires compatible with all possible behavioural and other physical facts. And I do admit it seems a tall order. But then again, so does straight-forward scientific underdetermination, it seems to me. Just to be clear, my personal preoccupation is the prescriptive or normative nature of oughts and hence wants and beliefs, which I think is a different problem than the underdetermination problem. The canonical statement comes in Chapter 2 of W.V.O. Quine's Word and Object. Quine focusses on linguistic behaviour, and on the conclusion that there is no unique correct translation manual for interpreting one person's utterances in the idiolect of another (even if they both speak, say, English). The claims about beliefs are a corollary. Donald Davidson takes up these ideas and relates them specifically to agents' beliefs in a number of places, notably his papers 'Radical Interpretation', 'Belief and the Basis of Meaning', and 'Thought and Talk', all reprinted in his Inquiries into Truth and Interpretation. Hilary Putnam, in his paper 'Models and Reality' (reprinted in his Realism and Reason), tried to give heft to what (I understand) comes down to Quine's idea by arguing it to be a consequence of the Löwenheim–Skolem theorem of mathematical logic.
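For reference, the standard statement of the theorem invoked here (the combined downward/upward form; nothing below is specific to Putnam's paper):

```latex
% Löwenheim–Skolem: if a first-order theory T has an infinite model,
% then it has models of every sufficiently large infinite cardinality.
\text{If } T \text{ has an infinite model, then } \forall\,\kappa \geq \max(|T|, \aleph_0):\ T \text{ has a model of cardinality } \kappa.
```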
1torekp
Timothy Bays has a reply to Putnam's alleged proof sufficient to render the latter indecisive, as far as I can see. The set theory is a challenge for me, though. As for Quine, on the one hand I think he underestimates the kinds of evidence that can bear, and he understates the force of simplicity considerations ("undetached rabbit-parts" could only be loved by a philosopher). But on the other hand, and perhaps more important, he seems right to downplay any remaining "difference" of alternative translations. It's not clear that the choice between workable alternatives is a problem.
0BobTheBob
Thanks for the link to the paper by Timothy Bays. It looks like a worthwhile -if rather challenging- read. I have to acknowledge there's lots to be said in response to Quine and Putnam. I could try to take on the task of defending them, but I suspect your ability to come up with objections would well outpace my ability to come up with responses. People get fed up with philosophers' extravagant thought experiments, I know. I guess Quine's implicit challenge with his "undetached rabbit parts" and so on is to come up with a clear (and, of course, naturalistic) criterion which would show the translation to be wrong. Simplicity considerations, as you suggest, may do it, but I'm not so sure.

To a first approximation, a belief is rational just in case you ought to hold it; an action rational just in case you ought to take it

A belief is rational just in case you rational-ought to hold it; an action rational just in case you rational-ought to take it.

Rational-ought beliefs and actions are the ones optimal for achieving your goals. Goals and optimality can be explained in scientific language. Rational-ought is not moral-ought. Moral-ought is harder to explain because it is about the goals an agent should have, not the ones they happen to have.


... (read more)
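The proposal in this comment, that rational-ought cashes out as goal-relative optimality, can be put mechanically. A minimal sketch under assumptions of my own (a finite action set, a deterministic world model, a numeric utility): the 'ought' reduces to an argmax over 'is' facts.

```python
def rational_ought(actions, outcome_of, utility):
    """The action an agent 'rational-ought' to take, on this proposal:
    the one optimal for its goals, given how the world works.

    actions    -- available actions
    outcome_of -- action -> outcome (descriptive 'is' facts about the world)
    utility    -- outcome -> float  (the agent's goals; on this view, also
                  just an 'is' fact about the agent)
    """
    return max(actions, key=lambda a: utility(outcome_of(a)))

# Hypothetical toy usage:
print(rational_ought(
    ["eat_apple", "skip_lunch"],
    {"eat_apple": "fed", "skip_lunch": "hungry"}.get,
    {"fed": 1.0, "hungry": 0.0}.get,
))  # -> "eat_apple"
```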
0BobTheBob
I'm sincerely not sure about the rational-ought/moral-ought distinction - I haven't thought enough about it. But anyway, I think moral-ought is a red herring, here. As far as I can see, the claims made in the post apply to rational-oughts. That was certainly the intention. In other posts on LW about fallacies and the details of rational thinking, it's a commonplace to use quite normative language in connection with rationality. Indeed, a primary goal is to help people to think and behave more rationally, because this is seen for each of us to be a good. 'One ought not to procrastinate', 'One ought to compensate for one's biases', etc.. Would love to see the details... :-) I'm not sure I get this. The intention behind drawing the initial distinction between is/ought problems was to make clear the focus is not on, as it were, the mind of the beholder. The question is a less specific variant of the question as to how any mere physical being comes to have intentions (e.g., to buy a lawnmower) in the first place. I agree, but I think it does mean you ought to in a qualified sense. Your merely being in a physical or computational state, however, by itself doesn't, or so the thought goes.
2randallsquared
Rational-oughts are just shortcuts for actions and patterns of actions which are most efficient for reaching your goals. Moral-oughts are about the goals themselves. Determining which actions are most efficient for reaching your goals is entirely naturalistic, and reduces ultimately to statements about what is. Moral-oughts reduce both to what is, and to what satisfies more important goals. The ultimate problem is that there's no way to justify a top-level goal. For a designed mind with a top-level goal, this is not actually a problem, since there's no way to reason one's way to a top-level goal change -- it can be taken as an 'is'. For entities without top-level goals, however, such as humans, this is a serious problem, since it means that there's no ultimate justification for any action at all, only interim justifications, the force of which grows weaker rather than stronger as you climb the reflective tower of goals.
-1Peterdjones
That makes it easier for a designed mind to do rational-ought, but equally harder to do moral-ought.
0randallsquared
It might make it easier for a top-level-goal-haver (TLGH) to choose a rational-ought, since there can be no real conflict, but it doesn't necessarily make it easier to reason about, given such a goal. However, I'd say that it makes it much, much easier to do (what the TLGH sees as) moral-ought, since the TLGH presumably has a concrete top level goal, rather than having to figure it out (or have the illusion of trying to figure it out). The TLGH knows what the morally right thing to do is -- it's hardwired. Figuring out morality is harder when you don't already have a moral arrow preset for you. That isn't to say that we'd agreed that a TLGH has the "correct" arrow of morality, but the TLGH can be completely sure that it does, since that's really what it means to have a top level goal. Any wondering about whether a TLGH did the right thing, by itself, will be rational-ought, not moral-ought. Now, if you meant that it will be harder for it to act like we'd consider a moral entity, then I'd say (again, assuming a top level goal) that it will either do so, or it won't, but it won't be difficult to force itself to do the right thing. This also assumes such a straightforward goal-seeking design is possible for an intelligence. I don't have an opinion on that.
0Peterdjones
Knowledge requires justification. A TLGH that understands epistemology would see itself as not knowing its TLG, since "it was hardwired into me" is no justification. This applies to humans: we are capable of doubting that our evolutionarily derived moral attitudes are the correct ones. Evolutionary psychology tells us that our evolutionary history has given us certain moral attitudes and behaviour. So far, so good. Some scientifically minded types take this to constitute a theory of objective morality all in itself. However, that would be subject to the Open Question objection: we can ask of our inherited morality whether it is actually right. (Unrelatedly, we are probably not determined to follow it, since we can overcome strong evolutionary imperatives in, for instance, voluntary celibacy.) This is not a merely abstract issue either, since EP has been used to support some contentious claims; for instance, that men should be forgiven for adultery since it is "in their genes" to seek multiple partners. And if there is any kind of objective truth about which goals are the true top level goals, that is going to have to come from reasoning. Empiricism fails because there are no perceivable moral facts, and ordinary facts fall into the is-ought divide. Rationality is probably better at removing goals than setting them, better at thou-shalt-nots than thou-shalts. That is in line with the liberal-secular view of morality, where it would be strange and maybe even obnoxious for everyone to be pursuing the same aim.
0randallsquared
This only applies to humans because we are not TLGHs. Beliefs and goals require justification because we might change them. Beliefs and goals which are hardwired do not require justification; they must be taken as given. As far as I'm aware, humans only ever have beliefs or goals that seem hardwired in this sense in the case of damage, like people with Capgras delusion. In fact, I would argue that we can only genuinely ask if our "inherited morality" is right because we are not determined to follow it.
0Peterdjones
I said knowledge requires justification. I was appealing to the standard True Justified Belief theory of knowledge. That belief per se does not need justification is not relevant.
0randallsquared
So, it's no justification in this technical sense, and it might cheerfully agree that it doesn't "know" its TLG in this sense, but that's completely aside from the 100% certainty with which it holds it, a certainty which can be utterly unshakable by reason or argument. I misunderstood what you were saying due to "justification" being a technical term, here. :)
0Peterdjones
It's been sketched out several times already, by various people.

1. You have a set of goals (a posteriori "is").
2. You have a set of strategies for achieving goals with varying levels of efficiency (a posteriori "is").
3. Being rational is applying rationality to achieve goals optimally (analytical "is"), ie if you want to be rational, you ought to optimise your UF.

Of course that isn't pure empiricism (what is?), because 3 is a sort of conceptual analysis of "oughtness". I am not bothered about that for a number of reasons: I am not committed to the insolubility of the is/ought gap, nor to the non-existence of objective ethics.

I don't see why the etiology of intentions should pose any more of a problem than the representation of intentions. You can build robots that seek out light sources. "Seek light sources" is represented in its programming. It came from the programmer. Where's the problem?

But the qualified sense is easily explained as goal+strategy. You rational-ought to adopt strategies to achieve your goals. Concrete facts about my goals and situation, and abstract facts about which strategies achieve which goals, are all that is needed to establish truths about rational-ought. What is unnaturalistic about that? The abstract facts about strategies may be unnaturalisable in a sense, but it is a rather unimpactive sense. Abstract reasoning in general isn't (at least usefully) reducible to atoms, but that doesn't mean it is "about" some non-physical realm. In a sense it isn't about anything. It just operates on its own level.

Consider then a virus particle ... Surely there is nothing in biochemistry, genetics or other science which implies there is anything our very particle ought to do. It's true that we may think of it as having the goal to replicate itself, and consider it to have made a mistake if it replicates itself inaccurately, but these conceptions do not issue from science. Any sense in which it ought to do something, or is wrong or mistaken in acting in a given way, is surely purely metaphorical (no?).

No. The distinction between those viral behaviors that tend to... (read more)

5[anonymous]
I'm having trouble with the word "metaphysical". In order for me to make sense of the claim that "mistake" and "exothermic" do not have prior metaphysical meanings, I would like to see some examples of words that do have prior metaphysical meanings, so that I can try to figure out from contrasting examples of having and not having prior metaphysical meanings what it means to have a prior metaphysical meaning. Because at the moment I don't know what you're talking about.
0Perplexed
Hmmm. I may be using "metaphysical" inappropriately here. I confess that I am currently reading something that uses "metaphysical" as a general term of deprecation, so some of that may have worn off. :) Let me try to answer your excellent question by analogy to geometry, without abandoning "metaphysical". As is well known, in geometry, many technical terms are given definitions, but it is impossible to define every technical term. Some terms (point, line, and on are examples) are left undefined, though their meanings are supplied implicitly by way of axioms. Undefined terms in mathematics correspond (in this analogy) to words with prior metaphysical meaning in philosophical discourse. You can't define them, because their meaning is somehow "built in". To give a rather trivial example, when trying to generate a naturalistic definition of ought, we usually assume we have a prior metaphysical meaning for is. Hope that helped.
1Peterdjones
That doesn't work. It would mean conformists are always in the right, irrespective of what they are conforming to.
0Perplexed
As you may have noticed, that definition was labeled as a "first attempt". It captures some of our intuitions about morality, but not all. In particular, its biggest weakness is that it fails to satisfy moral realists for precisely the reason you point out. I have a second quill in my quiver. But before using it, I'm going to split the concept of morality into two pieces. One piece is called "de facto morality". I claim that the definition I provided in the grandparent is a proper reductionist definition of de facto morality and captures many of (some) people's intuitions about morality. The second piece is called "ideal morality". This piece is essentially what de facto morality ought to be. So, your conformist may well be automatically in the right with respect to de facto morality. But it is possible for a moral reformer to point out that he and all of his fellows are in the wrong with respect to ideal morality. That is, the reformer claims that the society would be better off if its de facto conventions were amended from their present unsatisfactory status to become more like the ideal. And, I claim, given the right definition of "society would be better off", this "ideal morality" can be given an objective and naturalistic definition. For more details, see Binmore - Game Theory and the Social Contract
0AdeleneDawner
Not exactly. It means that conformists are never morally wrong, unless some group (probably one that they're not conforming with) punishes them for conforming. They can be morally neutral when conforming, and may be rationally wrong at the same time.
0timtyler
The main trick seems to be getting people to agree on a definition. For instance this: ... aims rather low. That just tells people to do what they would do anyway. Part of the social function of morality is to give people an ideal to personally aim towards. Another part of the social function of morality is to provide people with an ideal form of behaviour, in order to manipulate others into behaving "better". Another part of the social function of morality is to allow people to signal their goodness by broadcasting their moral code. Done right, that makes them seem more trustworthy and predictable. Your proposal does not score very well on these fronts.
0torekp
I think this is right, except possibly for the part about no prior metaphysical meaning. The later explanation of that part didn't clarify it for me. Instead, I'll just indicate what prior meaning I find attached to the idea that "the virus replicated wrongly." In biology, the idea that organs and behaviors and so on have functions is quite common and useful. The novice medical student can make many correct inferences about the heart by supposing that its function is to pump blood, for example. The idea preceded Darwin, but post-Darwin, we can give a proper naturalistic reduction for it. Roughly speaking, an organ's function is F iff in the ancestral environment, the organ's performance of F is what it was selected for. Various RNA features in a virus might have functions in this sense, and if so, that gives the meaning of saying that in a particular case, the viral reproduction mechanism failed to operate correctly. That's not a moral norm. It's not even the kind of norm relating to an agent's interests, in my view. But it is a norm. There was a pre-existing meaning of "biological function" before Darwin came around. So, a Darwinian definition of biological function was not a purely stipulative one. It succeeded only because it captured enough of the tentatively or firmly accepted notions about "biological function" to make reasonably good sense of all that.
0Perplexed
I think I see the source of the difficulty now. My fault. BobTheBob mentioned the mistake of replicating with errors. I took this to be just one example of a possible mistake by a virus, and thought of several more - inserting into the wrong species of host, for example, or perhaps incorporating an instance of the wrong peptide into the viral shell after replicating the viral genome. I then sought to define 'mistake' to capture the common fitness-lowering feature of all these possible mistakes. However, I did not make clear what I was doing and my readers naturally thought I was still dealing with a replication error as the only kind of mistake. Sorry to have caused this confusion.
0BobTheBob
If I bet higher than 1/6th on a fair die's rolling 6 because in the last ten rolls 6 hasn't come up -meaning it's now 'due'- I make a mistake. I commit an error of reasoning; I do something wrong; I act in a manner I ought not to. What about the virus particle which, in the course of sloshing about in an appropriate medium, participates in the coming into existence of a particle composed of RNA which, as it happens, is mostly identical but differs from it in a few places. Are you saying that this particle makes a mistake in the same sense of 'mistake' as I do in making my bet? Option (1): The sense is precisely the same (and it is unproblematically naturalistic). In this case I have to ask what the principles are by which one infers to conclusions about a virus's mistakes from facts about replication. What are the physical laws, how are their consequences (the consequences, again, being claims about what a virus ought to do) measured or verified, and so on? Option (2): The senses are different. This was the point of calling the RNA mistake metaphorical. It was to convey that the sense is importantly different than it is in the betting case. The idea is that the sense, if any, in which a virus makes a 'mistake' in giving rise to a non-exact replica of itself is not enough to sustain the kind of norms required for rationality. It is not enough to sustain the conclusions about my betting behaviour. Is this fair?
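The betting mistake described here can be checked by simulation. A minimal sketch (mine, standard library only): conditioning on ten prior rolls with no six, the chance of a six on the next roll is still about 1/6.

```python
import random

# Gambler's-fallacy check: condition on "no six in the last ten rolls"
# and estimate the probability that the *next* roll is a six.
random.seed(0)
hits = trials = 0
while trials < 20_000:
    last_ten = [random.randint(1, 6) for _ in range(10)]
    if 6 in last_ten:
        continue              # keep only histories where a 6 seems 'due'
    trials += 1
    hits += (random.randint(1, 6) == 6)

print(hits / trials)          # ~0.167, not higher: the die has no memory
```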
2Perplexed
Not really. You started by making an argument that listed a series of stages (virus, bacterium, nematode, man) and claimed that at no stage along the way (before the last) were any kind of normative concepts applicable. Then, when I suggested the standard evolutionary explanation for the illusion of teleology in nature, you shifted the playing field. In option 1, you demand that I supply standard scientific expositions of the natural history of your chosen biological examples. In option 2 you suggest that you were just kidding in even mentioning viruses, bacteria and nematodes. Unless an organism has the cognitive equipment to make mistakes in probability theory, you simply are not interested in speaking about it normatively. Do I understand that you are claiming that humans are qualitatively exceptional in the animal kingdom because the word "ought" is uniquely applicable to humans? If so, let me suggest a parallel sequence to the one you suggested starting from viruses. Zygote, blastula, fetus, infant, toddler, teenager, adult. Do you believe it is possible to tell a teenager what she "ought" to do? At what stage in development do normative judgements become applicable. Here is a cite for sorites. Couldn't resist the pun.
0BobTheBob
I appreciate your efforts to spell things out. I have to say I'm getting confused, though I meant to say that at no stage -including the last!- does the addition of merely naturalistic properties turn a thing into something subject to norms -something of which it is right to say it ought, for its own sake, to do this or that. I also said that the sense of right and wrong and of purpose which biology provides is merely metaphorical. When you talk about "the illusion of teleology in nature", that's exactly what I was getting at (or so it seems to me). That is, teleology in nature is merely illusory, but the kind of teleology needed to make sense of rationality is not - it's real. Can you live with this? I think a lot of people are apt to think that illusory teleology sort of fades into the real thing with increasing physical complexity. I see the pull of this idea, but I think it's mistaken, and I hope I've at least suggested that adherents of the view have some burden to try to defend it. Now that is a whole other can of worms... This is a fair and a difficult question. Roughly, another individual becomes suitable for normative appraisal when and to the extent that s/he becomes a recognizably rational agent -ie, capable of thinking and acting for her/himself and contributing to society (again, very roughly). All kinds of interesting moral issues lurk here, but I don't think we have to jump to any conclusions about them. In case I'm giving the wrong impression, I don't mean to be implying that people are bound by norms in virtue of possessing some special aura or other spookiness. I'm not giving a theory of the nature of norms - that's just too hard. All I'm saying for the moment is that if you stick to purely natural science, you won't find a place for them.
1timtyler
The usual trick is to just call it teleonomy. Teleonomy is teleology with smart pants on.
0BobTheBob
Thanks for this - I hadn't encountered this concept. Looks very useful.
0timtyler
Similar is the Dawkins distinction between designed and designoid objects. Personally I was OK with "teleonomy" and "designed". Biologists get pushed into this sort of thing by the literal-minded nit-pickers.
0Perplexed
No, I cannot. It presumes (or is it argues?) that human rationality is not part of nature. My apologies for using the phrase "illusion of teleology in nature". It seems to have created confusion. Tabooing that use of the word "teleology", what I really meant was the illusion that living things were fashioned by some rational agent for some purpose of that agent. Tabooing your use of the word, on the other hand, in your phrase "the kind of teleology needed to make sense of rationality" leads elsewhere. I would taboo and translate that use to yield something like "To make sense of rationality in an agent, one needs to accept/assume/stipulate that the agent sometimes acts with a purpose in mind. We need to understand 'purpose', in that sense, to understand rationality." Now if this is what you mean, then I agree with you. But I think I understand this kind of purpose, identifying it as the cognitive version of something like "being instrumental to survival and reproduction". That is, it is possible for an outside observer to point to behaviors or features of a virus that are instrumental to viral survival and reproduction. At the level of a bacterium, there are second-messenger chemicals that symbolize or represent situations that are instrumental to survival and reproduction. At the level of the nematode, there are neuron firings serving as symbols. At the level of a human the symbols can be vocalizations: "I'm horny; how about you?". I don't see anything transcendently new at any stage in this progression, nor in the developmental progression that I offered as a substitute. Let me try putting that in different words: "Norms are in the eye of the beholder. Natural science tries to be objective - to avoid observer effects. But that is not possible when studying rationality. It requires a different, non-reductionist and observer dependent way of looking at the subject matter." If that is what you are saying, I may come close to agreeing with you. But somehow, I don't
0BobTheBob
Thanks, yes. This is very clear. I can buy this. Sorry if I'm slow to be getting it, but my understanding of your view is that the sort of purpose that a bacterium has, on the one hand, and the purpose required to be a candidate for rationality, on the other, are, so to speak, different in degree but not in kind. They're the same thing, just orders of magnitude more sophisticated in the latter case (involving cognitive systems). This is the idea I want to oppose. I have tried to suggest that bacterial purposes are 'merely' teleonomic -to borrow the useful term suggested by timtyler- but that human purposes must be of a different order. Here's one more crack at trying to motivate this, using very evidently non-scientific terms. On the one hand, I submit that you cannot make sense of a thing (human, animal, AI, whatever) as rational unless there is something that it cares about. Unless that is, there is something which matters or is important to it (this something can be as simple as survival or reproduction). You may not like to see a respectable concept like rationality consorting with such waffly notions, but there you have it. Please object to this if you think it's false. On the other hand, nothing in nature implies that anything matters (etc) to a thing. You can show me all of the behavioural/cognitive correlates of X's mattering to a thing, or of a thing's caring about X, and provide me detailed evolutionary explanations of the behavioural correlates' presence, but these correlates simply do not add up to the thing's actually caring about X. X's being important to a thing, X's mattering, is more than a question of mere behaviour or computation. Again, if this seems false, please say. If both hands seem false, I'd be interested to hear that, too. As soon as we start to talk about symbols and representation, I'm concerned that a whole new set of very thorny issues get introduced. I will shy away from these. "It requires a different, non-reductionist ... way
timtyler
Humans have brains, and can better represent future goal states. However, "purpose" in nature ultimately comes from an optimisation algorithm. That is usually differential reproductive success. Human brains run their own optimisation algorithm - but it was built by and reflects the goals of the reproducers that built it. I would be reluctant to dis bacterial purposes. They are trying to steer the future too - it is just that they are not so good at it.
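A toy sketch of this point in code (my illustration, with an invented `fitness` function and `TARGET` value; no claim that this is anyone's actual model): differential reproductive success alone, acting on imperfectly copied replicators, behaves as an optimisation algorithm, so the population comes to act as if it were steering toward a goal that no individual represents.

```python
import random

TARGET = 0.8  # hypothetical environmental optimum for some trait

def fitness(trait):
    # Reproductive success falls off with distance from the optimum.
    return max(0.0, 1.0 - abs(trait - TARGET))

# Start from a population with random trait values in [0, 1].
population = [random.random() for _ in range(100)]

for generation in range(50):
    # Differential reproduction: fitter replicators are copied more often.
    weights = [fitness(t) for t in population]
    population = random.choices(population, weights=weights, k=len(population))
    # Imperfect copying: small mutations supply fresh variation.
    population = [t + random.gauss(0, 0.01) for t in population]

mean_trait = sum(population) / len(population)
print(f"mean trait after selection: {mean_trait:.2f}")  # drifts toward TARGET
```

Nothing in the loop represents TARGET as a goal; the "trying" is entirely in the eye of the observer comparing generations, which is the teleonomic reading at issue here.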
BobTheBob
You use a fair bit of normative, teleological vocabulary here: 'purpose', 'goal', 'success', 'optimisation', 'trying', being 'good' at 'steering' the future. I understand your point is that these terms can all be cashed out in unproblematic, teleonomic terms, and that this is more or less the end of the matter: nothing dubious going on here. Is it fair to say, though, that this does not really engage my point, which is that such teleonomic substitutes are insufficient to make sense of rationality? To make sense of rationality, we need claims such as:

* One ought to rank probabilities of events in accordance with the dictates of probability theory (or some more elegant statement to that effect).

If you translate this statement, substituting for 'ought' the details of the teleonomic 'ersatz' correlate, you get a very complicated statement about what one likely will do in different circumstances, and possibly about one's ancestors' behaviours and their relation to those ancestors' survival chances (all with no norms). This latter complicated statement will not mean what the first statement means, and won't do the job the first statement does in a discussion of rationality. The latter statement will be an elaborate description; what's needed is a prescription.

Probably none of this should matter to someone doing biology, or for that matter decision theory. But if you want to go beyond that and commit to a doctrine like naturalism or physical reductionism, then I submit this does become relevant.
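For concreteness, the 'dictates of probability theory' invoked in the bullet above are standardly taken to be the Kolmogorov axioms; a minimal rendering of the norm (my gloss, in standard notation):

```latex
% Coherence norm: an agent's credence function P over events ought to satisfy
\begin{align*}
  & P(A) \ge 0                 && \text{for every event } A, \\
  & P(\Omega) = 1              && \text{for the sure event } \Omega, \\
  & P(A \cup B) = P(A) + P(B)  && \text{whenever } A \cap B = \emptyset.
\end{align*}
```

Note that the formulas themselves are purely descriptive constraints; the 'ought' attaches to them from outside, which is exactly the gap the comment is pointing at.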
Peterdjones
Do you accept that a description of what an ideal agent does is equivalent to a prescription of what a non-ideal agent (with the same goals) should do?
BobTheBob
This is a nice way of putting things. As long as we're clear that what makes it a prescription is the fact that it is an ideal for the non-ideal agent. Do you think this helps the cause of naturalism?
Peterdjones
Yes. Well, it helps with my crusade to show that objective morality can be based on pure reason (abstract reasoning is rather apt for dealing with ideals; it is much easier to reason about a perfect circle than about a wobbly, hand-drawn one).
Peterdjones
What is missing? A quale?
Perplexed
My position is that, to the extent that the notion of purpose is at all spooky, that spookiness was already present in a virus. The profound part of teleology is already there in teleonomy. Which is not to say that humans are different from viruses only in degree. They are different in quality with regard to some other issues involved in rationality: cognitive issues, symbol-processing issues, issues of intentionality. But not issues of pure purpose and telos. So why don't you and I just shy away from this conversation? We've both stated our positions with sufficient clarity, I think.

The foundational problem in your thesis is that you have grounded "rationality" in a normative "ought" over beliefs and actions. I dispute that assertion.

Rationality is more reasonably grounded as selecting actions so as to satisfy your explicit or implicit desires. There is no normative force to statements of the form "action X is not rational" once they are unpacked as "If your values fall into {large set of human-like values}, then action X is not optimal, choosing for all similar situations where the algorithm you use is run".


TrE
Compare What Do We Mean By Rationality.

> The foregoing thoughts are hardly original. David Hume is famous for having observed that ought cannot be derived from is.

See also the article From 'Is' to 'Ought', by Douglas_Reay.

You are suggesting that "want" implies "ought". One way to interpret this is as a sort of moral relativism - that if I want to punch babies, then I ought to punch babies. With this claim I would disagree strenuously.

But I'm guessing that this is not how you mean it. I'm not sure exactly what concepts of "want" and "ought" you are using.

One point you are making is that we can discuss "ought" in the context of any utility function. You do not seem to be making any claims about what utility function we should choose. You are instead arguing that there is no naturalistic source telling us what utility function to choose. You presumably wish us to pick one and discuss it?

BobTheBob
You're right that I didn't mean this necessarily to be about a specifically moral sense of 'ought'. As for the suggested inference about baby-punching, I would push that onto the 'other things being equal' clause, which covers a multitude of sins. No acceptable theory can entail that one ought to be punching babies, I agree.

The picture I want to suggest should be taken seriously is that on one side of the fence are naturalistic properties, and on the other are properties such as rationality, having wants/beliefs, being bound by norms as to right and wrong (where this can be as meagre a sense of right and wrong as "it is right to predicate 'cat' of this animal and wrong to predicate 'dog' of it" -or, "it is right to eat this round, red thing if you desire an apple, and wrong to eat it if you desire a pear"), oughts, goals, values, and so on. And that the fence isn't as trivial to break down as one might think.

My understanding is that a utility function is something like a mapping of states of affairs (possible worlds?) onto, say, real numbers. In this context, the question would be giving naturalistic sense to the notion of value -that is, of something's being better or worse for the agent in question- which the numbers here are meant to correlate with. It's the notion of some states of affairs being more or less optimal for the agent -which I think is part of the concept of a utility function- that I want to argue is outside the scope of natural science. Please correct me on utility functions if I've got the wrong end of the stick.

To be clear - the intent isn't to attack the idea that there can be interesting and fruitful theories involving utility functions, rationality and related notions. It's just that these aren't consistent with a certain view of science and facthood.
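For reference, a sketch of the textbook formalization being described here (standard decision-theoretic notation; the symbols are generic, not specific to this thread):

```latex
% A utility function maps states of affairs (outcomes) to real numbers;
% expected-utility theory then singles out an optimal action:
\begin{align*}
  u &: \Omega \to \mathbb{R} \\
  \mathrm{EU}(a) &= \sum_{\omega \in \Omega} P(\omega \mid a)\, u(\omega) \\
  a^{*} &= \arg\max_{a \in A} \mathrm{EU}(a)
\end{align*}
```

On Peterdjones's framing earlier in the thread, the first two lines are the description of the ideal agent, and reading a* as "the action a non-ideal agent with the same u should take" is what turns the description into a prescription.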
Will_Sawin
I guess the main thing I want to suggest is that there is more than one fence that is hard/impossible to breach. Furthermore, depending on how you define certain terms, some of the fences may or may not be breachable. I'm also saying that non-natural "facts" are as easy to work with as natural facts. That they're not part of natural science doesn't impair our ability to discuss them productively.
BobTheBob
I agree entirely with this. This exercise isn't meant in any way to be an attack on decision theory or the like. The target is so-called naturalism - the view that all facts are natural facts.
Will_Sawin
I see. That makes sense.