TheOtherDave comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong

20 Post author: Stuart_Armstrong 15 May 2012 10:23AM


Comment author: TheOtherDave 16 May 2012 02:51:28PM 0 points [-]

I usually treat this behavior as something similar to the availability heuristic.

That is, there's a theory that one of the ways humans calibrate our estimates of the likelihood of an event X is by trying to imagine an instance of X, and measuring how long that takes, and calculating our estimate of probability inverse-proportionally to the time involved. (This process is typically not explicitly presented to conscious awareness.) If the imagined instance of X is immediately available, we experience high confidence that X is true.
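The inverse-proportionality described here can be put as a toy model. This is a minimal sketch of that idea only; the functional form and the constant `k` are illustrative assumptions, not anything claimed in the comment.

```python
# Toy model of the availability heuristic as described above:
# confidence in "X is likely" falls as the time needed to imagine
# an instance of X grows. The decay curve and k are assumptions.

def availability_confidence(retrieval_time_s: float, k: float = 1.0) -> float:
    """Map imagination/retrieval time (seconds) to a confidence in [0, 1].

    Instant retrieval (time 0) yields confidence 1.0; confidence decays
    toward 0 as imagining an instance takes longer.
    """
    return k / (k + retrieval_time_s)

# An instance that comes to mind immediately feels near-certain;
# one that takes real effort to construct feels doubtful.
```

Under this sketch, the failure modes mentioned below fall out naturally: anything that makes an instance easy to imagine (vividness, recent exposure, fiction) inflates the confidence regardless of the actual frequency of X.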

That mechanism makes a certain amount of rough-and-ready engineering sense, though of course it has lots of obvious failure modes, especially as you expand the system's imaginative faculties. Many of those failure modes are frequently demonstrated in modern life.

The thing is, we use much of the same machinery that we evolved for considering events like "a tiger eats my children" to consider pseudo-events like "a tiger eating my children is a bad thing." So it's easy for us to calibrate our estimates of the likelihood that a tiger eating my children is a bad thing in the same way: if an instance of a tiger eating my children feeling like a bad thing is easy for me to imagine, I experience high confidence that the proposition is true. It just feels obvious.

I don't think this is quite the same thing as moral realism, but when that judgment is simply taken as an input without being carefully examined, the result is largely equivalent.

Conversely, the more easily I can imagine a tiger eating my children not feeling like a bad thing, the lower that confidence. More generally, the more I actually analyze (rather than simply referencing) my judgments, the less compelling this mechanism becomes.

What I expect, given the above, is that if I want to shake someone off that kind of naive moral realist position, it helps to invite them to consider situations in which they arrive at counterintuitive (to them) moral judgments. The more I do this, the less strongly the availability heuristic fires, and over time this will weaken that leg of their implicit moral realism, even if I never engage with it directly.

I've known a number of people who react very very negatively to being invited to consider such situations, though, even if they don't clearly perceive it as an attack on their moral confidence.

Comment author: Stuart_Armstrong 16 May 2012 02:55:27PM 1 point [-]

More generally, the more I actually analyze (rather than simply referencing) my judgments, the less compelling this mechanism becomes.

it helps to invite them to consider situations in which they arrive at counterintuitive (to them) moral judgments

But philosophers are extremely fond of analysis, and make great use of trolley problems and similar edge cases. I'm really torn - people who seem very smart and skilled in reasoning take positions that seem to make no sense. I keep telling myself that they are probably right and I'm wrong, but the more I read about their justifications, the less convincing they are...

Comment author: TheOtherDave 16 May 2012 03:23:31PM 1 point [-]

Yeah, that's fair. Not all philosophers do this, any more than all computer programmers come up with test cases to ensure their code is doing what it ought, but I agree it's a common practice.

Can you summarize one of those positions as charitably as you're able to? It might be that, given such a summary, someone else can offer an insight that extends that structure.

Comment author: Stuart_Armstrong 16 May 2012 03:37:56PM *  2 points [-]

"There are sets of objective moral truths such that any rational being that understood them would be compelled to follow them". The arguments seem mainly to be:

1) Playing around with the meaning of rationality until you get something like "any rational being would realise their own pleasure is no more valid than that of others" or "pleasure is the highest principle, and any rational being would agree with this, or else be irrational".

2) Convergence among human values.

3) Moral progress for society: we're better than we used to be, so there needs to be some scale to measure the improvements.

4) Moral progress for individuals: when we think about things a lot, we make better moral decisions than when we were young and naive. Hence we're getting better at moral reasoning, so there is some scale on which to measure this.

5) Playing around with the definition of "truth-apt" (able to have a valid answer) in ways that strike me, uncharitably, as intuition-pumping word games. When confronted with this, I generally end up saying something like "my definitions do not map on exactly to yours, so your logical steps are false dichotomies for me".

6) Realising things like "if you can't be money pumped, you must be an expected utility maximiser", which implies that expected utility maximisation is superior to other reasoning, hence that there are some methods of moral reasoning which are strictly inferior. Hence there must be better ways of moral reasoning and (this is the place where I get off) a single best way (though that argument is generally implicit, never explicit).
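The money-pump step in 6) can be made concrete with a minimal sketch. The item names and fee are purely illustrative: an agent with cyclic (intransitive) preferences will pay a small fee for each trade up its preference cycle and end up strictly poorer while holding the item it started with.

```python
# Toy money pump: the agent's preferences are cyclic
# (apple > banana > cherry > apple), so it can be led around the
# cycle indefinitely, paying a fee at every step. All names and
# the fee amount are illustrative assumptions.

PREFERS = {("apple", "banana"),   # apple preferred to banana
           ("banana", "cherry"),  # banana preferred to cherry
           ("cherry", "apple")}   # cherry preferred to apple (the cycle)

def accepts_trade(offered, held):
    """The agent trades whenever it strictly prefers the offered item."""
    return (offered, held) in PREFERS

def money_pump(item, fee=1.0, steps=6):
    """Repeatedly offer the item the agent prefers, charging a fee per trade."""
    money = 0.0
    for _ in range(steps):
        # Find the item the agent prefers to what it currently holds.
        offered = next(a for (a, b) in PREFERS if b == item)
        if accepts_trade(offered, item):
            item, money = offered, money - fee
    return item, money

final_item, final_money = money_pump("banana")
# After 6 trades the agent holds "banana" again but has paid 6 fees.
```

This shows why non-pumpable preferences must be transitive (one of the conditions for behaving as an expected utility maximiser); it does not, as noted above, get you from "some methods of moral reasoning are strictly inferior" to "there is a single best one".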

Comment author: TheOtherDave 16 May 2012 04:40:49PM 5 points [-]

(nods) Nice.

OK, so let me start out by saying that my position is similar to yours... that is, I think most of this is nonsense. But having said that, and trying to adopt the contrary position for didactic purposes... hm.

So, a corresponding physical-realist assertion might be that there are sets of objective physical structures such that any rational being that perceived the evidence for them would be compelled to infer their existence. (Yes?)

Now, why might one believe such a thing? Well, some combination of reasons 2-4 seems to capture it.

That is: in practice, there at least seem to be physical structures we all infer from our senses such that we achieve more well-being with less effort when we act as though those structures existed. And there are other physical structures that we infer the existence of via a more tenuous route (e.g., the center of the Earth, or Alpha Centauri, or quarks), to which 2 doesn't really apply (most people who believe in quarks have been taught to believe in them by others; they mostly didn't independently converge on that belief), but 3 and 4 do... when we posit the existence of these entities, we achieve worthwhile things that we wouldn't achieve otherwise, though sometimes it's very difficult to express clearly what those things actually are. (Yes?)

So... ok. Does that case for physical realism seem compelling to you?
If so, and if arguments 2-4 are sufficient to compel a belief in physical realism, why are their analogs insufficient to compel a belief in moral realism?

Comment author: Stuart_Armstrong 17 May 2012 12:20:23PM 0 points [-]

So... ok. Does that case for physical realism seem compelling to you?

No - to me it just highlights the difference between physical facts and moral facts, making them seem very distinct. But I can see how if we had really strong 2-4, it might make more sense...

Comment author: TheOtherDave 17 May 2012 01:13:33PM 2 points [-]

I'm not quite sure I understood you. Are you saying "no," that case for physical realism doesn't seem compelling to you? Or are you saying "no," the fact that such a case can compellingly be made for physical realism does not justify an analogous case for moral realism?

Comment author: Stuart_Armstrong 18 May 2012 10:57:57AM 0 points [-]

The second one!

Comment author: TheOtherDave 18 May 2012 02:28:42PM *  3 points [-]

So, given a moral realist, Sam, who argued as follows:

"We agree that humans typically infer physical facts such that we achieve more well-being with less effort when we act as though those facts were actual, and that this constitutes a compelling case for physical realism. It seems to me that humans typically infer moral facts such that we achieve more well-being with less effort when we act as though those facts were actual, and I consider that an equally compelling case for moral realism."

...it seems you ought to have a pretty good sense of why Sam is a moral realist, and what it would take to convince Sam they were mistaken.

No?

Comment author: Stuart_Armstrong 18 May 2012 03:43:47PM 0 points [-]

Interesting perspective. Is this an old argument, or a new one? (It seems vaguely similar to the Pascalian "act as if you believe, and that will be better for you".)

It might be formalisable in terms of bounded agents and stuff. What's interesting is that though it implies moral realism, it doesn't imply the usual consequence of moral realism (that all agents converge on one ethics). I'd say I understood Sam's position, and that he has no grounds to disbelieve orthogonality!

Comment author: Peterdjones 13 March 2014 08:08:50PM 1 point [-]

I could add: Objective punishments and rewards need objective justification.

Comment author: Peterdjones 13 March 2014 07:35:20PM -1 points [-]

From my perspective, treating rationality as always instrumental, and never a terminal value, is playing around with its traditional meaning. (And indiscriminately teaching instrumental rationality is like indiscriminately handing out weapons. The traditional idea, going back to at least Plato, is that teaching someone to be rational improves them... changes their values.)