TheOtherDave comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong
No, I cannot. I've read the various papers, and they all orbit around an implicit and often unstated moral realism. I've also debated philosophers on this, and the same issue rears its head - I can counter their arguments, but their opinions don't shift. There is an implicit moral realism that does not make any sense to me, and the more I analyse it, the less sense it makes, and the less convincing it becomes. Every time a philosopher has encouraged me to read a particular work, it's made me find their moral realism less likely, because the arguments are always weak.
I can't really put myself in their shoes to successfully argue their position (which I could do with theism, incidentally). I've tried and failed.
If someone can help me with this, I'd be most grateful. Why does "for reasons we don't know, any being will come to share and follow specific moral principles (but we don't know what they are)" ever come to seem plausible?
I usually treat this behavior as something similar to the availability heuristic.
That is, there's a theory that one of the ways humans calibrate our estimates of the likelihood of an event X is by trying to imagine an instance of X, and measuring how long that takes, and calculating our estimate of probability inverse-proportionally to the time involved. (This process is typically not explicitly presented to conscious awareness.) If the imagined instance of X is immediately available, we experience high confidence that X is true.
That mechanism makes a certain amount of rough-and-ready engineering sense, though of course it has lots of obvious failure modes, especially as you expand the system's imaginative faculties. Many of those failure modes are frequently demonstrated in modern life.
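The mechanism described above can be caricatured in a few lines. This is only a toy sketch: the function name, the time values, and the 1/(1+t) mapping are my own illustrative assumptions, not a claim about how real cognition is implemented.

```python
# Toy model of the availability heuristic described above: confidence
# in a proposition is computed from how quickly an instance of it can
# be imagined. The 1/(1+t) mapping is an illustrative assumption.

def availability_confidence(imagination_time_secs):
    """Map the time taken to imagine an instance to a confidence in (0, 1]."""
    return 1.0 / (1.0 + imagination_time_secs)

# An instance that springs to mind instantly yields high confidence;
# one that takes a long time to construct yields low confidence.
instant = availability_confidence(0.0)
slow = availability_confidence(9.0)
print(instant, slow)
```

The obvious failure mode is visible in the model itself: nothing constrains imagination time to track actual frequency, so anything vivid or well-rehearsed gets inflated confidence.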
The thing is, we use much of the same machinery that we evolved for considering events like "a tiger eats my children" to consider pseudo-events like "a tiger eating my children is a bad thing." So it's easy for us to calibrate our estimates of the likelihood that a tiger eating my children is a bad thing in the same way: if an instance of a tiger eating my children feeling like a bad thing is easy for me to imagine, I experience high confidence that the proposition is true. It just feels obvious.
I don't think this is quite the same thing as moral realism, but when that judgment is simply taken as an input without being carefully examined, the result is largely equivalent.
Conversely, the more easily I can imagine a tiger eating my children not feeling like a bad thing, the lower that confidence. More generally, the more I actually analyze (rather than simply referencing) my judgments, the less compelling this mechanism becomes.
What I expect, given the above, is that if I want to shake someone off that kind of naive moral realist position, it helps to invite them to consider situations in which they arrive at counterintuitive (to them) moral judgments. The more I do this, the less strongly the availability heuristic fires, and over time this will weaken that leg of their implicit moral realism, even if I never engage with it directly.
I've known a number of people who react very very negatively to being invited to consider such situations, though, even if they don't clearly perceive it as an attack on their moral confidence.
But philosophers are extremely fond of analysis, and make great use of trolley problems and similar edge cases. I'm really torn - people who seem very smart and skilled in reasoning take positions that seem to make no sense. I keep telling myself that they are probably right and I'm wrong, but the more I read about their justifications, the less convincing they are...
Yeah, that's fair. Not all philosophers do this, any more than all computer programmers come up with test cases to ensure their code is doing what it ought, but I agree it's a common practice.
Can you summarize one of those positions as charitably as you're able to? It might be that, given that, someone else can offer an insight that extends that structure.
"There are sets of objective moral truths such that any rational being that understood them would be compelled to follow them". The arguments seem mainly to be:
1) Playing around with the meaning of rationality until you get something like this ("any rational being would realise their own pleasure is no more valid than that of others", or "pleasure is the highest principle, and any rational being would agree with this, or else be irrational").
2) Convergence among human values.
3) Moral progress for society: we're better than we used to be, so there needs to be some scale to measure the improvements.
4) Moral progress for individuals: when we think about things a lot, we make better moral decisions than when we were young and naive. Hence we're getting better at moral reasoning, so there is some scale on which to measure this.
5) Playing around with the definition of "truth-apt" (able to have a valid answer) in ways that strike me, uncharitably, as intuition-pumping word games. When confronted with this, I generally end up saying something like "my definitions do not map on exactly to yours, so your logical steps are false dichotomies for me".
6) Realising things like "if you can't be money pumped, you must be an expected utility maximiser", which implies that expected utility maximisation is superior to other reasoning, hence that there are some methods of moral reasoning which are strictly inferior. Hence there must be better ways of moral reasoning and (this is the place where I get off) a single best way (though that argument is generally implicit, never explicit).
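The money-pump argument in point 6 can be made concrete with a toy example. An agent with cyclic (intransitive) preferences A > B > C > A will pay a small fee for each "upgrade", and so can be driven around the cycle losing money indefinitely; avoiding this is what pushes toward expected utility maximisation. The option names and the fee are illustrative assumptions.

```python
# Toy money pump: an agent with cyclic preferences A > B > C > A
# pays a small fee for each trade it strictly prefers, and ends up
# holding its original option while strictly poorer.

FEE = 1.0

# prefers[x] is the option the agent strictly prefers to x,
# forming the cycle C -> B -> A -> C.
prefers = {"C": "B", "B": "A", "A": "C"}

def run_pump(start_option, wealth, rounds):
    """Repeatedly offer the agent the option it prefers, for a fee."""
    option = start_option
    for _ in range(rounds):
        option = prefers[option]  # the agent happily trades up...
        wealth -= FEE             # ...and pays the fee each time
    return option, wealth

final_option, final_wealth = run_pump("C", wealth=10.0, rounds=9)
# After nine trades (three full cycles) the agent holds the same
# option it started with, but is nine units poorer.
print(final_option, final_wealth)
```

Note that this only establishes that cyclic preferences are strictly inferior; as the comment says, the further step from "some methods of moral reasoning are inferior" to "there is a single best one" is usually left implicit.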
(nods) Nice.
OK, so let me start out by saying that my position is similar to yours... that is, I think most of this is nonsense. But having said that, and trying to adopt the contrary position for didactic purposes... hm.
So, a corresponding physical-realist assertion might be that there are sets of objective physical structures such that any rational being that perceived the evidence for them would be compelled to infer their existence. (Yes?)
Now, why might one believe such a thing? Well, some combination of reasons 2-4 seems to capture it.
That is: in practice, there at least seem to be physical structures we all infer from our senses such that we achieve more well-being with less effort when we act as though those structures existed. And there are other physical structures that we infer the existence of via a more tenuous route (e.g., the center of the Earth, or Alpha Centauri, or quarks, or etc.), to which #2 doesn't really apply (most people who believe in quarks have been taught to believe in them by others; they mostly didn't independently converge on that belief), but 3 and 4 do... when we posit the existence of these entities, we achieve worthwhile things that we wouldn't achieve otherwise, though sometimes it's very difficult to express clearly what those things actually are. (Yes?)
So... ok. Does that case for physical realism seem compelling to you?
If so, and if arguments 2-4 are sufficient to compel a belief in physical realism, why are their analogs insufficient to compel a belief in moral realism?
No - to me it just highlights the difference between physical facts and moral facts, making them seem very distinct. But I can see how if we had really strong 2-4, it might make more sense...
I'm not quite sure I understood you. Are you saying "no," that case for physical realism doesn't seem compelling to you? Or are you saying "no," the fact that such a case can compellingly be made for physical realism does not justify an analogous case for moral realism?
The second one!
So, given a moral realist, Sam, who argued as follows:
"We agree that humans typically infer physical facts such that we achieve more well-being with less effort when we act as though those facts were actual, and that this constitutes a compelling case for physical realism. It seems to me that humans typically infer moral facts such that we achieve more well-being with less effort when we act as though those facts were actual, and I consider that an equally compelling case for moral realism."
...it seems you ought to have a pretty good sense of why Sam is a moral realist, and what it would take to convince Sam they were mistaken.
No?
I could add: Objective punishments and rewards need objective justification.
From my perspective, treating rationality as always instrumental, and never a terminal value, is playing around with its traditional meaning. (And indiscriminately teaching instrumental rationality is like indiscriminately handing out weapons. The traditional idea, going back to at least Plato, is that teaching someone to be rational improves them... changes their values.)