Esar comments on [Poll] Less Wrong and Mainstream Philosophy: How Different are We? - Less Wrong

Post author: Jayson_Virissimo, 26 September 2012 12:25PM


Comment author: [deleted] 26 September 2012 05:17:40PM, 0 points

Is that possible? Can you think both a) that one should, in general, act so as to maximise happiness/utility/whatever, and b) that there are no general moral rules?

I think that's a contradiction.

Comment author: pragmatist 26 September 2012 05:59:32PM, 3 points

Consequentialism doesn't require a commitment to maximizing any particular variable. It's the claim that only the consequences of actions are relevant to their moral evaluation. I think that's a weak enough claim that you can't really call it a general moral principle. So one could believe that only consequences are morally relevant, while holding that the way one evaluates actions based on their consequences conforms to no general principle.

If Luke had said that he's a utilitarian who is also a particularist, that would have been a contradiction.

Comment author: [deleted] 26 September 2012 06:19:04PM, 1 point

> I think that's a weak enough claim that you can't really call it a general moral principle.

That's a good point. So should I take from Luke's claim that he does not believe one should (as a moral rule) maximise expected utility, or anything like that? And that he would say it's possible (if perhaps unlikely) for an action to be good even if it minimises expected utility?

Comment author: pragmatist 27 September 2012 06:52:27AM, 0 points

> So should I take from Luke's claim that he does not believe one should (as a moral rule) maximise expected utility, or anything like that?

I probably shouldn't speak for Luke, but I'm guessing the answer to this is yes. If it isn't, then I don't understand how he's a particularist.

> And that he would say it's possible (if perhaps unlikely) for an action to be good even if it minimises expected utility?

I don't see why he should be committed to this claim.

Comment author: Manfred 27 September 2012 08:09:51AM, 0 points

I took it to mean that Luke requires an agent to be at least somewhat consequentialist before he even thinks of its decision procedure as a morality.