Comment author: lukeprog 03 February 2011 04:14:38PM 1 point [-]

Categorical oughts and reasons have always confused me. What do you see as the difference, and which type of each are you thinking of? The types of categorical oughts and reasons with which I'm most familiar are Kant's and Korsgaard's.

Comment author: utilitymonster 03 February 2011 05:41:22PM 0 points [-]

R is a categorical reason for S to do A iff R counts in favor of doing A for S, and would so count for other agents in a similar situation, regardless of their preferences. If it were true that we always have reasons to benefit others, regardless of what we care about, that would be a categorical reason. I don't use the term "categorical reason" any differently than "external reason".

S categorically ought to do A just when S ought to do A regardless of what S cares about, and it would remain true that any agent in a similar situation ought to do A, regardless of what that agent cares about. The rule "always maximize happiness" would, if true, ground a categorical ought.

I see very little reason to be more skeptical of categorical reasons than of categorical oughts, or vice versa.

Comment author: lukeprog 03 February 2011 03:25:37PM *  1 point [-]

utilitymonster,

For the record, as a good old Humean I'm currently an internalist about reasons, which leaves me unable (I think) to endorse any form of utilitarianism, where utilitarianism is the view that we ought to maximize X. Why? Because internal reasons don't always support maximizing X, and perhaps rarely do, and I don't think external reasons for maximizing X exist. For example, I don't think X has intrinsic value (in Korsgaard's sense of "intrinsic value").

Thanks for the link to that paper on rational choice theories and decision theories!

Comment author: utilitymonster 03 February 2011 03:37:00PM 0 points [-]

So are categorical reasons any worse off than categorical oughts?

Comment author: lukeprog 03 February 2011 03:43:56AM 8 points [-]

Eliezer,

I think the reason you're having trouble with the standard philosophical category of "reasons for action" is that you have the admirable quality of being confused by that which is confused. I think the "reasons for action" category is confused. At least, the only action-guiding norm I can make sense of is desire/preference/motive (let's call it motive). I should eat the ice cream because I have a motive to eat the ice cream. I should exercise more because I have many motives that will be fulfilled if I exercise. And so on. All this stuff about categorical imperatives or divine commands or intrinsic value just confuses things.

How would a computer program enumerate all motives (which, according to me, is co-extensional with "all reasons for action")? It would have to roll up its sleeves and do science. As it expands across the galaxy, perhaps encountering other creatures, it could do some behavioral psychology and neuroscience on these creatures to decode their intentional action systems (as it had done already with us), and thereby enumerate all the motives it encounters in the universe, their strengths, the relations between them, and so on.

But really, I'm not yet proposing a solution. What I've described above doesn't even reflect my own meta-ethics. It's just an example. I'm merely raising questions that need to be considered very carefully.

And of course I'm not the only one to do so. Others have raised concerns about CEV and its underlying meta-ethical assumptions. Will Newsome raised some common worries about CEV and proposed computational axiology instead. Tarleton's 2010 paper compares CEV to an alternative proposed by Wallach & Collin.

The philosophical foundations of the Friendly AI project need more examination, I think. Perhaps you are very confident about your meta-ethical views and about CEV; I don't know. But I'm not confident about them. And as you say, we've only got one shot at this. We need to make sure we get it right. Right?

Comment author: utilitymonster 03 February 2011 03:02:23PM *  1 point [-]

I can see that you might question the usefulness of the notion of a "reason for action" as something over and above the notion of "ought", but I don't see a good case for the stronger claim that "reason for action" is confused.

The main worry here seems to have to do with categorical reasons for action. Diagnostic question: are these more troubling/confused than categorical "ought" statements? If so, why?

Perhaps I should note that philosophers talking this way make a distinction between "motivating reasons" and "normative reasons". A normative reason to do A is a good reason to do A, something that would help explain why you ought to do A, or something that counts in favor of doing A. A motivating reason just helps explain why someone did, in fact, do A. One of my motivating reasons for killing my mother might be to prevent her from being happy. By saying this, I do not suggest that this is a normative reason to kill my mother. It could also be that R would be a normative reason for me to do A, but R does not motivate me to do A. (ata seems to assume otherwise, since ata is getting caught up with who these considerations would motivate. Whether reasons could work like this is a matter of philosophical controversy. Saying this more for others than you, Luke.)

Back to the main point, I am puzzled largely because the most natural ways of getting categorical oughts can get you categorical reasons. Example: simple total utilitarianism. On this view, R is a reason to do A if R is the fact that doing A would cause someone's well-being to increase. The strength of R is the extent to which that person's well-being increases. One weighs one's reasons by adding up all of their strengths. One then does the thing that one has most reason to do. (It's pretty clear in this case that the notion of a reason plays an inessential role in the theory. We can get by just fine with well-being, ought, causal notions, and addition.)
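
To make the bookkeeping vivid, here is a minimal Python sketch of util's reason-weighing as just described. The action names and well-being numbers are invented for illustration; nothing here is anyone's official theory.

```python
# Toy sketch of simple total utilitarianism's reason-weighing.
# A reason for action A is the fact that A raises someone's well-being;
# the reason's strength is the size of that increase.  All names and
# numbers below are invented for illustration.

def total_strength(wellbeing_increases):
    # Weigh the reasons for an action by adding up their strengths.
    return sum(wellbeing_increases)

def most_reason(options):
    # Do the thing one has most reason to do.  `options` maps each
    # action to the well-being increases it causes, one per person.
    return max(options, key=lambda a: total_strength(options[a]))

print(most_reason({"donate": [5, 4], "loaf": [2]}))  # -> donate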

Utilitarianism, as always, is a simple case. But it seems like many categorical oughts can be thought of as being determined by weighing factors that count in favor of and count against the course of action in question. In these cases, we should be able to do something like what we did for util (though sometimes that method of weighing the reasons will be different/more complicated; in some bad cases, this might make the detour through reasons pointless).

The reasons framework seems a bit more natural in non-consequentialist cases. Imagine I try to maximize aggregate well-being, but I hate lying to do it. I might count the fact that an action would involve lying as a reason not to do it, but not believe that my lying makes the world worse. To get oughts out of a utility function instead, you might model my utility function as the result of adding up aggregate well-being and subtracting a factor that scales with the number of lies I would have to tell if I took the action in question. Again, it's pretty clear that you don't HAVE to think about things this way, but it is far from clear that this is confused/incoherent.
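
As a toy illustration of that modeling move (the penalty weight and the numbers are invented, not a claim about anyone's actual values):

```python
# Toy model of the lying-averse well-being maximizer described above.
# The agent does not think lies make the world worse; they enter only
# as a subtracted factor in its own utility.  LIE_WEIGHT is invented.

LIE_WEIGHT = 3.0

def utility(aggregate_wellbeing, lies_required):
    return aggregate_wellbeing - LIE_WEIGHT * lies_required

# (aggregate well-being, lies the action requires)
actions = {"honest_plan": (10, 0), "lying_plan": (12, 1)}
best = max(actions, key=lambda a: utility(*actions[a]))
print(best)  # -> honest_plan: the small gain isn't worth one lie
```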

Perhaps the LW crowd is perplexed because people here take utility functions as primitive, whereas philosophers talking this way tend to take reasons as primitive and derive ought statements (and, on a very lucky day, utility functions) from them. This paper, which tries to help reasons folks and utility function folks understand/communicate with each other, might be helpful for anyone who cares much about this. My impression is that we clearly need utility functions, but don't necessarily need the reason talk. The main advantage of learning the reason talk would be understanding philosophers who talk that way, if that's important to you. (Much of the recent work in meta-ethics relies heavily on the notion of a normative reason, as I'm sure Luke knows.)

Comment author: jimrandomh 02 February 2011 10:21:41PM 3 points [-]

This is either horribly confusing, or horribly confused. I think that what's going on here is that you (or the sources you're getting this from) have taken a bundle of incompatible moral theories, identified a role that some part of each of them plays, and generalized a term from one of those theories inappropriately.

The same thing can be a reason for action, a reason for inaction, a reason for belief and a reason for disbelief all at once, in different contexts depending on what consequences these things will have. This makes me think that "reason for action" does not carve reality, or morality, at the joints.

Comment author: utilitymonster 03 February 2011 02:31:22PM *  0 points [-]

I'm sort of surprised by how people are taking the notion of "reason for action". Isn't this a familiar process when making a decision?

  1. For all courses of action you're thinking of taking, identify the features (consequences, if that's how you think about things) that count in favor of taking that course of action and those that count against it.

  2. Consider how those considerations weigh against each other. (Do the pros outweigh the cons, by how much, etc.)

  3. Then choose the thing that does best in this weighing process. (A toy code sketch of these steps follows.)
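
Here is that three-step process as a minimal Python sketch; the options, features, and weights are all invented for illustration:

```python
# Toy sketch of the three-step weighing process above.
# Positive weights are considerations in favor; negative, against.

def net_weight(features):
    # Step 2: weigh the pros against the cons.
    return sum(features.values())

def decide(options):
    # Step 3: choose whatever does best in the weighing.
    return max(options, key=lambda a: net_weight(options[a]))

# Step 1: for each option, list what counts for and against it.
options = {
    "take_job": {"higher_pay": 5, "long_commute": -3},
    "stay_put": {"short_commute": 2, "lower_pay": -1},
}
print(decide(options))  # -> take_job (net 2 beats net 1)
```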

> The same thing can be a reason for action, a reason for inaction, a reason for belief and a reason for disbelief all at once, in different contexts depending on what consequences these things will have. This makes me think that "reason for action" does not carve reality, or morality, at the joints.

It is not a presupposition of the people talking this way that if R is a reason to do A in a context C, then R is a reason to do A in all contexts.

The people talking this way also understand that a single R might be both a reason to do A and a reason to believe X at the same time. You could also have R be a reason to believe X and a reason to cause yourself to not believe X. Why do you think these things make the discourse incoherent/non-perspicuous? This seems no more puzzling than the familiar fact that holding a certain belief could be epistemically irrational but prudentially rational to (cause yourself to) hold.

Comment author: lukeprog 01 February 2011 06:18:25PM 1 point [-]

Yes, that's a claim that in my experience, most philosophers disagree with. It's one I'll need to argue for. But I do think one's meta-ethical views have large implications for one's normative views that are often missed.

Comment author: utilitymonster 01 February 2011 09:38:59PM 1 point [-]

Even if we grant that one's meta-ethical position will determine one's normative theory (which is very contentious), one would like some evidence that it would be easier to find the correct meta-ethical view than it would be to find the correct (or appropriate, or whatever) normative ethical view. Otherwise, why not just do normative ethics?

Comment author: RichardChappell 30 January 2011 08:40:28PM *  40 points [-]

Eliezer's metaethics might be clarified in terms of the distinctions between sense, reference, and reference-fixing descriptions. I take it that Eliezer wants to use 'right' as a rigid designator to denote some particular set of terminal values, but this reference fact is fixed by means of a seemingly 'relative' procedure (namely, whatever terminal values the speaker happens to hold, on some appropriate [if somewhat mysterious] idealization). Confusions arise when people mistakenly read this metasemantic subjectivism into the first-order semantics or meaning of 'right'.

In summary:

(i) 'Right' means, roughly, 'promotes external goods X, Y and Z'

(ii) Claim (i) above is true because I desire X, Y, and Z.

Note that Speakers Use Their Actual Language, so murder would still be wrong even if I had the desires of a serial killer. But if I had those violent terminal values, I would speak a slightly different language than I do right now, so that when KillerRichard asserts "Murder is right!" what he says is true. We don't really disagree, but are instead merely talking past each other.
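
For the programmers here, one (unofficial) way to picture the rigidification is as early binding. This toy sketch is my own analogy, not anything Richard or Eliezer has endorsed, and the "goods" are invented:

```python
# Toy analogy: rigid designation as early binding.  Simple subjectivism
# would look up the speaker's desires at evaluation time; the rigidified
# theory bakes the actual, current desires into the meaning of "right"
# once, when the language is fixed.

my_desires = {"happiness", "fairness"}   # invented terminal values

def fix_reference(speaker_desires):
    fixed = frozenset(speaker_desires)   # reference fixed here, once
    return lambda outcome: outcome in fixed

right = fix_reference(my_desires)

my_desires = {"murder"}      # KillerRichard's values replace mine...
print(right("fairness"))     # True  -- "right" still tracks the values
print(right("murder"))       # False -- it was defined with
```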

Virtues of the theory:

(a) By rigidifying on our actual, current desires (or idealizations thereof), it avoids Inducing Desire Satisfactions.

(b) Shifting the subjectivity out to the metasemantic level leaves us with a first-order semantic proposal that at least does a better job than simple subjectivism at 'saving the phenomena'. (It has echoes of Mark Schroeder's desire-based view of reasons, according to which the facts that give us reasons are the propositional contents of our desires, rather than the desires themselves. Or something like that.)

(c) It's naturalistic, which is a virtue if you find moral non-naturalism 'spooky'. (Though I'd sooner recommend Mackie-style error theory for naturalists, since I don't think (b) above is enough to save the phenomena.)

Objections

(1) It's incompatible with the datum that substantive, fundamental normative disagreement is in fact possible. People may share the concept of a normative reason, even if they fundamentally disagree about which features of actions are the ones that give us reasons.

(2) The semantic tricks merely shift the lump under the rug, they don't get rid of it. Standard worries about relativism re-emerge, e.g. an agent can know a priori that their own fundamental values are right, given how the meaning of the word 'right' is determined. This kind of (even merely 'fundamental') infallibility seems implausible.

(3) Just as simple subjectivism is an implausible theory of what 'right' means, so Eliezer's meta-semantic subjectivism is an implausible theory of why 'right' means promoting external goods X, Y, Z. An adequately objective metaethics shouldn't even give preferences a reference-fixing role.

Comment author: utilitymonster 31 January 2011 01:39:04PM 5 points [-]

Yes, this is what I thought EY's theory was. EY? Is this your view?

In response to Rational Repentance
Comment author: utilitymonster 16 January 2011 03:50:10PM 0 points [-]

On the symbolic action point, you can try making the symbolic action into a public commitment. Research suggests this will increase the strength of the effect you're talking about. Of course, this could also make you overcommit, so this strategy should be used carefully.

Comment author: Vladimir_Nesov 01 March 2010 08:19:35PM *  2 points [-]

I agree, if you construct this upload-aggregate and manage to ban other uses for the tech. This was reflected in the next sentence of my comment (maybe not too clearly):

> To apply uploads specifically to FAI as opposed to generation of more existential risk, they have to be closely managed, which may be very hard to impossible once the tech gets out.

Comment author: utilitymonster 13 December 2010 05:30:00AM *  2 points [-]

Especially if WBE comes late (so there is a big hardware overhang), you wouldn't need much real time to spend loads of subjective years designing FAI. A small lead time could be enough. Of course, you'd have to be first and have significant influence on the project.

Edited for spelling.

Comment author: Jordan 09 December 2010 07:17:24AM 7 points [-]

An important academic option: get tenure at a less reputable school. In the States at least there are tons of universities that don't really have huge research responsibilities (so you won't need to worry about pushing out worthless papers, preparing for conferences, peer reviewing, etc), and also don't have huge teaching loads. Once you get tenure you can cruise while focusing on research you think matters.

The downside is that you won't be able to network quite as effectively as at a more prestigious university, and the pay isn't quite as good.

Comment author: utilitymonster 09 December 2010 01:49:13PM 2 points [-]

Don't forget about the ridiculous levels of teaching you're responsible for in that situation. Lots worse than at an elite institution.

In response to Efficient Charity
Comment author: utilitymonster 04 December 2010 08:06:16PM 1 point [-]

I thought this was really, really good.
