I don't think that anyone has ever doubted that science might be relevant in computing the expected consequences of actions.
Indeed. Put differently, science bears upon instrumental issues but not terminal ones. What would falsify this idea would be an example of new factual knowledge changing someone's perception of the moral value of some action, with this change persisting even after adjusting for the effect the knowledge has on the instrumental value of the action.
Neither Harris nor Academian seems to have provided such an example, and I'm not sure one exists. Following are two examples of a slightly different type that also seem to fail.
Alice thinks homosexuality is immoral because it's unnatural. Bob tells her that there are cases of animal homosexuality. Alice decides that it's not unnatural and that it isn't wrong. (But isn't being natural the end, with sexuality being merely a means, such that what we see here is still just a revaluation of instruments?)
Alice thinks it's wrong to X until Bob tells her about an evopsych theory under which condemning X was adaptive before people invented farming. Condemning X is not obviously adaptive or maladaptive today. Alice stops condemning X because she thinks her disapproval of it was just a mind trick and she'd rather not expend effort condemning things that aren't "really wrong." (Again, the end here is some sort of mental-energy economy, while the instrument is her moral belief set?)
That said, I'm not too comfortable with the idea that new knowledge has no effect on terminal values. This is because the other contenders for influence on terminal values (e.g. ancient instinct) seem decidedly less open to my control.
P.S. I'm rather new here, and have not finished the sequences. If I've missed something that's already been covered, I'd love a point in the correct direction.
...science bears upon instrumental issues but not terminal ones.
For what I consider non-obvious reasons, I disagree. As you say (and thanks for pointing this out explicitly),
What would falsify this idea would be an example of new factual knowledge changing someone's perception of the moral value of some action, with this change persisting even after adjusting for the effect the knowledge has on the instrumental value of the action.
I have undergone changes in values that I would describe in this way. Namely, I had something I considered a terminal...
tl;dr: Relativism bottoms out in realism by objectifying relations between subjective notions. This should be communicated using concrete examples that show its practical importance. It implies in particular that morality should think about science, and science should think about morality.
Sam Harris attacks moral uber-relativism when he asserts that "Science can answer moral questions". Countering the counterargument that morality is too imprecise to be treated by science, he makes an excellent comparison: "healthy" is not a precisely defined concept, but no one is crazy enough to claim that medicine cannot answer questions of health.
What needs adding to his presentation (which is worth seeing, though I don't entirely agree with it) is what I consider the strongest concise argument in favor of science's moral relevance: that morality is relative simply means that the task of science is to examine absolute relations between morals. For example, suppose you uphold the following two moral claims:

(1) "Teachers should be allowed to physically punish their students."

(2) "Adults should not commit criminal violence."
Note that questions of causality are significantly more accessible to science than people before 2000 thought possible. Now suppose a cleverly designed, non-invasive causal analysis found that physically punishing children, frequently or infrequently, causes them to be more likely to commit criminal violence as adults. Would you find this discovery irrelevant to your adherence to these morals? Absolutely not. You would reflect and realize that you needed to prioritize them in some way. Most would prioritize the second one, but in any case, science will have made a valid impact.
So although either of the two morals is purely subjective on its own, how these morals interrelate is a question of objective fact. Though perhaps obvious, this idea has some seriously persuasive consequences and is not to be taken lightly. Why?
First of all, you might change your morals in response to them not relating to each other in the way you expected. Ideas parse differently when they relate differently. "Teachers should be allowed to physically punish their students" might never feel the same to you after you find out it causes adult violence. Even if it originally felt like a terminal (fundamental) value, your prioritization of (2) might make (1) slowly fade out of your mind over time. In hindsight, you might just see it as an old, misinformed instrumental value that was never in fact terminal.
Second, as we increase the number of morals under consideration, the number of relations for science to consider grows rapidly, as (n² − n)/2: we have many more moral relations than morals themselves. Suddenly the old disjointed list of untouchable maxims called "morals" fades into the background, and we see a throbbing circulatory system of moral relations, objective questions and answers without which no person can competently reflect on her own morality. A highly prevalent moral like "human suffering is undesirable" looks like a major organ: important on its own to a lot of people, with many connections in and out for science to examine.
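The growth rate is easy to verify; here is a quick sketch (the `pair_count` name is just illustrative):

```python
# Each unordered pair of morals is one relation science can examine.
# With n morals there are (n^2 - n) / 2 such pairs (i.e., n choose 2).
def pair_count(n: int) -> int:
    return (n * n - n) // 2

for n in (2, 5, 10, 50):
    print(n, pair_count(n))
# 2 -> 1, 5 -> 10, 10 -> 45, 50 -> 1225: relations quickly outnumber morals.
```

Already at fifty morals, there are over a thousand pairwise relations to examine.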
Treating relativistic vertigo
To my best recollection, I have never heard the phrase "it's all relative" used to an effect that didn't involve stopping people from thinking. When the topic of conversation — morality, belief, success, rationality, or what have you — is suddenly revealed or claimed to depend on a context, people find it disorienting, often to the point of feeling the entire discourse has been and will continue to be "meaningless" or "arbitrary". Once this happens, it can be very difficult to persuade them to keep thinking, let alone thinking productively…
To rebuke this sort of conceptual nihilism, it's natural to respond with analogies to other relative concepts that are clearly useful to think about:
"Position, momentum, and energy are only relatively defined as numbers, but we don't abandon scientific study of those, do we?"
Though this is an important observation, it inevitably evokes the "But that's different" analogy-immune response. The real cure is in understanding explicitly what to do with relative notions: examine the objective relations between them.
To use one of these lines of argument effectively — and it can be very effective — one should follow up immediately with a specific example in the case you're talking about. Don't let the conversation drift in abstraction. If you're talking about morality, there is no shortage of objective moral relations that science can handle, so you can pick one at random to show how easy and common it is:
"Teen pregnancy / the spread of STDs is undesirable."
Question: Does promoting the use of condoms increase or decrease teen pregnancy rates / the spread of STDs?
"Married couples should do their best not to cheat on each other."
Question: Does masturbation increase or decrease adulterous impulses over time?
"Children should not be raised in psychologically damaging environments."
Question: What are the psychological effects of being raised by gay parents?
I'm not advocating here any of these particular moral claims, nor any particular resolution between them, but simply that the answer to the given question — and many other relevant ones — puts you in a much better position to reflect on these issues. Your opinion after you know the answer is more valuable than before.
"But of course science can answer some moral questions... the point is that it can't answer all of them. It can't tell us ultimately what is good or evil."
No. That is not the point. The point is whether you want teachers to beat their students. Do you? Well, science can help you decide. And more importantly, once you have decided, it can help you lead others to the same conclusion.
A lesson from history: What happens when you examine objective relations between subjective beliefs? You get probability theory… Bayesian updating… we know this story; it started around 200 years ago, and it ends well.
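The parallel can be made concrete with a minimal sketch of a Bayesian update (the function name and numbers are purely illustrative): two people with different subjective priors apply the same objective rule to the same evidence.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) via Bayes' rule: an objective relation
    between a subjective prior and the observed evidence."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# Different subjective priors, identical objective likelihoods:
print(round(bayes_update(0.2, 0.9, 0.3), 3))  # 0.429
print(round(bayes_update(0.8, 0.9, 0.3), 3))  # 0.923
```

The priors are subjective, but the relation between prior and posterior is a matter of objective fact, which is exactly the move being proposed for morality.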
Now it's morality's turn.