RobbBB comments on P/S/A - Sam Harris offering money for a little good philosophy - Less Wrong

Post author: Benito 01 September 2013 06:36PM

Comment author: RobbBB 05 September 2013 06:47:59PM 1 point

> The error with Harris' main point is hard to pin down, because it seems to me that his main fault is that his beliefs regarding morality aren't clearly worked out in his own head.

I think his beliefs are worked out and make sense, but aren't articulated well. What he's really doing is trying to replace morality-speak with a new, slightly different and more homogeneous way of speaking in order to facilitate scientific research (i.e., a very loose operationalization) and political cooperation (i.e., a common language).

But, I gather, he can't emphasize that point because then he'll start sounding like a moral anti-realist, and even appearing to endorse anything in the neighborhood of relativism will reliably explode most people's brains. (The realists will panic and worry we have to stop locking up rapists if we lose their favorite Moral System. The relativists will declare victory and take this metaphysical footnote as a vindication of their sloppy, reflectively inconsistent normative talk.)

> The problem is that Harris' main position can also be taken to mean that science can determine what preferences people ought to have in the first place, which is not possible because it is circular, and this is the main source of the criticism he receives. Unfortunately, Harris does not seem to get this, as he never addresses the issue.

This is not true. He recognizes this point repeatedly in the book and in follow-ups, and his response is simply that it doesn't matter. He's never claimed to have a self-justifying system, nor does he take it to be a particularly good argument against disciplines that can't achieve the inconsistent goal of non-circularly justifying themselves.

Check out his response to critics. That should clarify a lot.

> In the example of the super-intelligent aliens, for instance, he states that it is "obviously" right for us to let them eat us if this will increase total utility. This implies that everyone should feel compelled to maximise total utility, though he supplies no argument as to why this should be the case.

What do you mean by 'utility' here? If 'utility' is just a measure of how much something satisfies our values, then the obviousness seems a lot less mysterious.

> I suspect that a winning letter to Sam Harris would interpret his position favourably, agree with him on most points, and then raise a compelling new point that he has not yet thought of that causes him to change his mind slightly but which does not address the core of his problem.

Yeah, I plan to do basically that. (Not just as a tactic, though. I do agree with him on most of his points, and I do disagree with him on a specific just-barely-core issue.)

Comment author: [deleted] 12 September 2013 05:54:13PM 0 points

If you're willing to satisfy my curiosity, what's that specific issue? Would an argument falsifying his position on that issue amount to a refutation of the central argument of the book? If not, wouldn't your essay just be ineligible?

Comment author: RobbBB 12 September 2013 06:41:02PM 0 points

The issue I have in mind wasn't explicitly cited in the canonical summary he gives in the FAQ, but I asked Sam personally and he said the issue qualifies as 'central'. I can give you more details in February. :)

Comment author: Sophronius 13 September 2013 12:23:53PM 0 points

I did read his response to critics, in addition to skimming through his book. As far as I remember, his position really does seem vague and inconsistent, and he never addresses things like the supposed is-ought problem properly. He just handwaves it by saying it does not matter, as you point out, but this is not what I would call addressing it properly.

"Utility" always means preference satisfaction, as far as I know. The reason his answer is not obvious is that it assumes that what is desirable for the aliens must necessarily be desirable for us. In other words, it assumes a universal morality rather than a merely "objective" one (he assumes a universally compelling moral argument, to put it in Less Wrong terms). My greatest frustration in discussing morality is that people always confuse the ability to handle a moral issue objectively with the ability to create a moral imperative that applies to everyone, and Harris seems guilty of this here as well.

Comment author: RobbBB 13 September 2013 05:59:49PM 0 points

> he never addresses things like the supposed is-ought problem properly. He just handwaves it by saying it does not matter, as you point out, but this is not what I would call addressing it properly.

I don't know. What more is there to say about it? It's a special case of the fact that, for any sets of sentences P and Q, P cannot be derived from Q if P makes essential use of non-logical predicates that are absent from Q and we have no definition of those predicates in terms of Q-sentences. All non-logical words work the same way in that respect.
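
A minimal formal sketch of that fact, in LaTeX (the symbols \Gamma for the premise set, O for the absent non-logical predicate, and \varphi for the conclusion are my labels, not RobbBB's): if O does not occur in \Gamma, then evaluating \Gamma's sentences never consults O, so any model of \Gamma remains a model when O is reinterpreted arbitrarily:

\[
  M \models \Gamma \;\Longrightarrow\; M[O \mapsto R] \models \Gamma
  \qquad \text{for every relation } R \text{ on the domain of } M.
\]

Hence if \Gamma \models \varphi, then \varphi stays true under every reinterpretation of O, so \Gamma entails no sentence whose truth turns essentially on O. Reading \Gamma as descriptive ('is') premises and O as 'ought' gives Hume's gap as a special case, which is closed only by adding a definition of O in \Gamma's vocabulary, exactly as above.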

The interesting question isn't Hume's is/ought distinction, since it's just one of a billion distinctions of the same sort, e.g., the penguin/economics distinction and the electron/bacon distinction. Rather, the interesting question is Moore's Open Question argument, which is an entirely distinct point and can be adequately answered by: 'Insofar as this claim about the semantics of "morality" is right, it seems likely that an error theory of morality is correct; and insofar as it is useful to construct normative language that is reducible to descriptions, we will end up with a language that does not yield an Open Question when we explain why that, rather than something else, is what's "moral".'

I agree Harris should say that clearly somewhere. But all of this is almost certainly true given his views; he just apparently isn't interested in hashing it out. TML (The Moral Landscape) is a book on the rhetoric and pragmatics of science (and other human collaborations), not on metaphysics or epistemology.

> The reason his answer is not obvious is that it assumes that what is desirable for the aliens must necessarily be desirable for us.

Ideally desirable, not actually desired.

> In other words, it assumes a universal morality rather than a merely "objective" one (he assumes a universally compelling moral argument, to put it in Less Wrong terms).

No. See his response to the Problem of Persuasion; he doesn't care whether the One True Morality would persuade everyone to be perfectly moral; he assumes it won't. His claim about aliens is an assertion about his equivalent of our coherently extrapolated moral volition; it's not a claim about what arguments we would currently find compelling.