whowhowho comments on Decision Theory FAQ - Less Wrong
I think DP is saying that Clippy could not both understand suffering and cause suffering in the pursuit of clipping. The subsidiary arguments are:
I'm trying to figure out how you get from "hypersocial sentients understand the mind of Mr Clippy better than Clippy understands the minds of sentients" to "Mr Clippy could not both understand suffering and cause suffering in the pursuit of clipping" and I'm just at a loss for where to even start. They seem like utterly unrelated claims to me.
I also find the argument you quote here uncompelling, but that's largely beside the point; even if I found it compelling, I still wouldn't understand how it relates to what DP said or to the question I asked.
I (whowhowho) was not defending that claim.
To empathically understand suffering is to suffer along with someone who is suffering. Suffering has, or rather is, negative value. An empath would therefore not cause suffering, all else being equal.
Maybe don't restrict "understand" to "be able to model and predict".
If you want "rational" to include moral, then you're not actually disagreeing with LessWrong about rationality (the thing), but rather about "rationality" (the word).
Likewise, if you want "understanding" to also include "empathic understanding" (suffering when other people suffer, taking joy when other people take joy), you're not actually disagreeing about understanding (the thing) with people who want to use the word to mean "modelling and predicting"; you're disagreeing with them about "understanding" (the word).
Are all your disagreements purely linguistic ones? From the comments I've read of you so far, they seem to be so.
ArisKatsaris, it's possible to be a meta-ethical anti-realist and still endorse a much richer conception of what understanding entails than mere formal modeling and prediction. For example, if you want to understand what it's like to be a bat, then you want to know what the textures of echolocatory qualia are like. In fact, any cognitive agent that doesn't understand the character of echolocatory qualia-space does not understand bat-minds. More radically, some of us want to understand qualia-spaces that have not been recruited by natural selection to play any information-signalling role at all.
I have argued that in practice, instrumental rationality cannot be maintained separately from epistemic rationality, and that epistemic rationality could lead to moral objectivism, as many philosophers have argued. I don't think those arguments are refuted by stipulatively defining "rationality" as having nothing to do with morality.
I quoted DP making that claim, said that claim confused me, and asked questions about what that claim meant. You replied by saying that you think DP is saying something which you then defended. I assumed, I think reasonably, that you meant to equate the thing I asked about with the thing you defended.
But, OK. If I throw out all of the pre-existing context and just look at your comment in isolation, I would certainly agree that Clippy is incapable of having the sort of understanding of suffering that requires one to experience the suffering of others (what you're calling a "full" understanding of suffering here) without preferring not to cause suffering, all else being equal.
Which is of course not to say that all else is necessarily equal, and in particular is not to say that Clippy would choose to spare itself suffering if it could purchase paperclips at the cost of its suffering, any more than a human would necessarily refrain from doing something valuable solely because doing so would cause them to suffer.
Posthuman superintelligence may be incomprehensibly alien. But if we encountered an agent who wanted to maximise paperclips today, we wouldn't think, "wow, how incomprehensibly alien", but, "aha, autism spectrum disorder". Of course, in the context of Clippy above, we're assuming a hypothetical axis of (un)clippiness whose (dis)valuable nature is supposedly orthogonal to the pleasure-pain axis. But what grounds have we for believing such a qualia-space could exist? Yes, we have strong reason to believe incomprehensibly alien qualia-spaces await discovery (cf. bats on psychedelics). But I haven't yet seen any convincing evidence there could be an alien qualia-space whose inherently (dis)valuable textures map on to the (dis)valuable textures of the pain-pleasure axis. Without hedonic tone, how can anything _matter_ at all?
Meaning mapping the wrong way round, presumably.
Good question.
Agreed, as far as it goes. Hell, humans are demonstrably capable of encountering Eliza programs without thinking "wow, how incomprehensibly alien".
Mind you, we're mistaken: Eliza programs are incomprehensibly alien, we haven't the first clue what it feels like to be one, supposing it even feels like anything at all. But that doesn't stop us from thinking otherwise.
Sure, that's one thing we might think instead. Agreed.
(shrug) I'm content to start off by saying that any "axis of (dis)value," whatever that is, which is capable of motivating behavior is "non-orthogonal," whatever that means in this context, to "the pleasure-pain axis," whatever that is.
Before going much further, though, I'd want some confidence that we were able to identify an observed system as being (or at least being reliably related to) an axis of (dis)value and able to determine, upon encountering such a thing, whether it (or the axis to which it was related) was orthogonal to the pleasure-pain axis or not.
I don't currently have any grounds for such confidence, and I doubt anyone else does either. If you think you do, I'd like to understand how you would go about making such determinations about an observed system.