Comment author: SaidAchmiz 07 May 2014 07:19:57AM 1 point

Please don't use a custom font. It makes it harder to read.

Comment author: Kaj_Sotala 06 May 2014 05:13:48AM 1 point

Hedonistic utilitarians, however, do not acknowledge that it's possible, or that it's valid, to care about things that are not pain or pleasure. All these people who claim to care about all sorts of other things must be misguided!

I don't think that hedonistic utilitarianism necessarily implies moral realism. Some HUs will certainly tell you that the people who morally disagree with them are misguided, but I don't see why the proportion of HUs who think so (vs. the proportion of HUs who think that you are simply caring about different things) would need to be any different from what it would be among the adherents of any other ethical position.

Maybe you meant your comment to refer specifically to the kinds of HUs who would impose their position on you, but even then the moral realism doesn't follow. You can want to impose your values on others despite thinking that values are just questions of opinion. For instance, there are things that I consider basic human rights and I want to impose the requirement to respect them on every member of every society, even though there are people who would disagree with that requirement. I don't think that the people who disagree are misguided in any sense, I just think that they value different things.

Comment author: SaidAchmiz 07 May 2014 01:20:13AM 0 points

I agree with blacktrance's reply to you; also see my reply to tog in a different subthread for some commentary. However, I'm sufficiently unsure of what you're saying that I can't be certain your comment is fully answered by either of those things. For example:

HUs who think that you are simply caring about different things

If you [the hypothetical you] think that it's possible to care (intrinsically, i.e. terminally) about things other than pain and pleasure, then I'm not quite sure how you can remain a hedonistic utilitarian. You'd have to say something like: "Yes, many people intrinsically value all sorts of things, but those preferences are morally irrelevant, and it is ok to frustrate those preferences as much as necessary, in order to minimize pain and maximize pleasure." You would, in other words, have to endorse a world where all the things that people value are mercilessly destroyed, and the things they most abhor and despise come to pass, if only this world had the most pleasure and least pain.

Now, granted, people sometimes endorse the strangest things, and I wouldn't even be surprised to find someone on LessWrong who held such a view, but then again I never claimed otherwise. What I said was that I should hope those people do not impose such a worldview on me.

If I've misinterpreted your comment and thereby failed to address your points, apologies; please clarify.

Comment author: tog 06 May 2014 11:27:50PM 2 points

Hedonistic utilitarians, however, do not acknowledge that it's possible, or that it's valid, to care about things that are not pain or pleasure.

They may think it's incorrect if they're realists, or cognitivists of some other form. But this has nothing to do with their being HUs, only with their being cognitivists.

[Description of situation] ... well, I hope you can see how that would bother me.

Here are three (non-exhaustive) ways in which the situation you described could be bothersome:

(i) If your first-order ethical theory (as opposed to your meta-ethics), perhaps combined with very plausible facts about human nature, requires otherwise: for instance, if it speaks in favour of toleration or liberty here.

(ii) If you're a cognitivist of the sort who thinks she could be wrong, it could increase your credence that you're wrong.

(iii) If you would, at least on reflection, give weight to the evident distress SaidAchmiz feels in this scenario, as most HUs would.

Comment author: SaidAchmiz 07 May 2014 01:08:43AM * 0 points

Hedonistic utilitarians, however, do not acknowledge that it's possible, or that it's valid, to care about things that are not pain or pleasure.

They may think it's incorrect if they're realists, or cognitivists of some other form. But this has nothing to do with their being HUs, only with their being cognitivists.

No, I don't think this is right. I think you (and Kaj_Sotala) are confusing these two questions:

  1. Is it correct to hold an ethical view that is something other than hedonistic utilitarianism?
  2. Does it make any sense to intrinsically value anything other than pleasure, or intrinsically disvalue things other than pain?

#1 is a meta-ethical question; moral realism or cognitivism may lead you to answer "no", if you're a hedonistic utilitarian. #2 is an ethical question; it's about the content of hedonistic utilitarianism.

If I intrinsically care about, say, freedom, that's not an ethical claim. It's just a preference. "Humans may have preferences about things other than pain/pleasure, and those preferences are morally important" is an ethical claim that I might formulate about that preference of mine.

Hedonistic utilitarianism tells me that my aforementioned preference is incoherent or mistaken, and that in fact I do not have any preferences (or any preferences that are morally important or worth caring about) other than preferences about pleasure/pain.

Moral realism (which, as blacktrance correctly notes, is implied by any utilitarianism) may lead a hedonistic utilitarian to say that my aforementioned ethical claim is incorrect.

As for your scenarios, I'm not sure what you meant by listing them. My point was that my scenario, which describes a situation involving a hypothetical me, Said Achmiz, would be bothersome to me, Said Achmiz. Is it really not clear why it would be?

Comment author: Kaj_Sotala 06 May 2014 05:03:22AM 0 points

In that case, I'm unsure of what kind of an answer you were expecting (unless the "what then" was meant as a rhetorical question, but even then I'm slightly unsure of what point it was making).

Comment author: SaidAchmiz 06 May 2014 08:00:35PM 1 point

Yes, the "what then" was rhetorical. If I had to express my point non-rhetorically, it'd be something like this:

If you take a position which gives ethically correct results only until such time as some (reasonably plausible) scenario comes to pass, then maybe your position isn't ethical in the first place. "This ethical framework gives nonsensical or monstrous results in edge cases [of varying degrees of edge-ness]" is, after all, a common and quite justified criticism of ethical frameworks.

Comment author: Louie 05 May 2014 11:14:44AM * 43 points

2009: "Extreme Rationality: It's Not That Great"

2010: "Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality"

2013: "How about testing our ideas?"

2014: "Truth: It's Not That Great"

2015: "Meta-Countersignaling Equilibria Drift: Can We Accelerate It?"

2016: "In Defense Of Putting Babies In Wood Chippers"

Comment author: SaidAchmiz 05 May 2014 09:27:42PM 8 points

2016: "In Defense Of Putting Babies In Wood Chippers"

Heck, I could write that post right now. But what's it got to do with truth and such?

Comment author: Kaj_Sotala 05 May 2014 03:15:56PM * 2 points

Let's get clear on what we actually believe, I generally think; once we've firmly established that, we can look for maximally effective implementations.

For another thing, HU may be the best approximation etc. etc., but that's a claim that at least should be made explicitly

I agree that it would often be good to be clearer about these points.

For a third thing, what happens when forcibly rewiring people's brains becomes a realistic option?

At that point the people who consider themselves hedonistic utilitarians might come up with a theory that says that forcible wireheading is wrong and switch to calling themselves supporters of that theory. Or they could go on calling themselves HUs despite not forcibly wireheading anyone, in the same way that many people call themselves utilitarians today despite not actually giving most of their income away. Or some of them could decide to start working towards efforts to forcibly wirehead everyone, in which case they'd become the kinds of people described by my reply 2).

"Only approving of those behaviors that serve to promote HU" is, I think, a separate thing. Or at least, I'd need to see the concept expanded a bit more before I could judge.

By this, I meant to say "only approve of whatever course of action HU says is the best one".

Comment author: SaidAchmiz 05 May 2014 09:09:23PM 2 points

At that point ... [various possibilities]

Yeah, I meant that as a normative "what then", not an empirical one. I agree that what you describe are plausible scenarios.

Comment author: tog 05 May 2014 08:09:27PM 3 points

There could indeed be people who accept HU because that's what correctly describes their moral intuitions. (Though I should certainly hope they do not think it proper to impose that moral philosophy on me, or on anyone else who doesn't subscribe to HU!)

Why would this be improper? Note that it doesn't follow from any meta-ethical position.

Comment author: SaidAchmiz 05 May 2014 08:20:11PM 1 point

If you say "all that matters is pain and pleasure", and I say "no! I care about other things!", and you're like "nope, not listening. PAIN AND PLEASURE ARE THE ONLY THINGS", and then proceed to enact policies which minimize pain and maximize pleasure, without regard for any of the other things that I care about, and all the while I'm telling you that no, I care about these other things! Stop ignoring them! Other things matter to me! but you're not listening because you've decided that only pain and pleasure can possibly matter to anyone, despite my protestations otherwise...

... well, I hope you can see how that would bother me.

It's not just a matter of us caring about different things. If it were only that, we could acknowledge the fact, and proceed to some sort of compromise. Hedonistic utilitarians, however, do not acknowledge that it's possible, or that it's valid, to care about things that are not pain or pleasure. All these people who claim to care about all sorts of other things must be misguided! Clearly.

Comment author: Vaniver 05 May 2014 07:21:24PM 2 points

Ditto for Vaniver and Said.

I approve of virtuous acts, and disapprove of vicious ones.

In terms of labels, I think I give consequentialist answers to the standard ethical questions, but I think most character improvement comes from thinking deontologically, because of the tremendous amount of influence our identities have on our actions. If one thinks of oneself as humble, that changes how one acts in many known ways. One's abstract, far-mode views are likely to change only one's speech, not one's behavior. Thus, I don't put all that much effort into theories of ethics, and instead try to put effort into acting virtuously.

Comment author: SaidAchmiz 05 May 2014 07:54:01PM 1 point

Interestingly, it seems our views are complementary, not contradictory. I would (I think) be willing to endorse what you said as a recipe for implementing the views I describe.

Comment author: tog 05 May 2014 05:54:26PM 1 point

Ditto for Vaniver and Said.

Comment author: SaidAchmiz 05 May 2014 06:57:32PM 1 point

There is no such centralized place, no; I've alluded to my views in comments here and there over the past year or so, but haven't laid them out fully. (Then again, I'm a member of no movements that depend heavily on any ethical positions. ;)

Truth be told — and I haven't disguised this — my ethical views are not anywhere near completely fleshed-out. I know the general shape, I suppose, but beyond that I'm more sure about what I don't believe — what objections and criticisms I have to other people's views — than about what I do believe. But here's a brief sketch.

I think that consequentialism, as a foundational idea, a basic approach, is the only one that makes sense. Deontology seems to me to be completely nonsensical as a grounding for ethics. Every seemingly intelligent deontologist to whom I've spoken (which, admittedly, is a small number — a handful of people here on LessWrong) has appeared to be spouting utter nonsense. Deontology has its uses (see Bostrom's "Infinite Ethics", and this post by Eliezer, for examples), but there it's deployed for consequentialist reasons: we think it'll give better results.

I've seen the view expressed that virtue ethics is descriptively correct as an account of how human minds implement morality, and (as a result) prescriptively valid as a recommendation of how to implement your morality in your own mind once you've decided on your object-level moral views; that seems like a more-or-less reasonable stance to take. As an actual philosophical grounding for morality, virtue ethics is nonsense, but perhaps that's fine, given the above.

Consequentialism actually makes sense. Consequences are the only things that matter? Well, yes. What else could there be?

As far as varieties of consequentialism go... I think intended and foreseeable consequences matter when evaluating the moral rightness of an act, not actual consequences; judging based on actual consequences seems utterly useless, because then you can't even apply decision theory to the problem of deciding how to act. Judging on actual consequences also utterly fails to accord with my moral intuitions, while judging on intended and foreseeable consequences fits quite well.

I tend toward rule consequentialism rather than act consequentialism; I ask not "what would be the consequences of such an act?", but "what sort of world would it be like, where [a suitably generalized class of] people acted in this [suitably generalized] way? Would I want to live in such a world?", or something along those lines. I find act consequentialism to be too often short-sighted, and open to all sorts of dilemmas to which rule consequentialism simply does not fall prey.

I take seriously the complexity of value, and think that hedonistic utilitarianism utterly fails to capture that complexity. I would not want to live in a world ruled by hedonistic utilitarians. I wouldn't want to hand them control of the future. I generally think that preferences are what's important, and ought to be satisfied — I don't think there's any such thing as intrinsically immoral preferences (not even the preference to torture children), although of course one might have uninformed preferences (no, Mr. Example doesn't really want to drink that glass of acid; what he wants is a glass of beer, and his apparent preference for acid would dissolve immediately, were he apprised of the facts); and satisfying certain preferences might introduce difficult conflicts (the fellow who wants to torture children — well, if satisfying his preferences would result in actual children being actually tortured, then I'm afraid we couldn't have that). "I prefer to kill myself because I am depressed" is genuinely problematic, however. That's an issue that I think about often.

All that seems like it might make me a preference utilitarian, or something like it, but as I've said, I'm highly skeptical about the possibility or even coherence of aggregating utility across individuals, not to mention the fact that I don't think my own preferences adhere to the VNM axioms, and so it may not even be possible to construct a utility function for all individuals. (The last person with whom I was discussing this stopped commenting on LessWrong before I could get hold of my copy of Rational Choice in an Uncertain World, but now I've got it, and I'm willing to discuss the matter, if anyone likes.)
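For reference, here is one standard textbook statement of the axioms in question, for a preference relation \succsim over lotteries A, B, C (nothing below is specific to this discussion):

  \text{Completeness: } A \succsim B \text{ or } B \succsim A
  \text{Transitivity: } A \succsim B \text{ and } B \succsim C \implies A \succsim C
  \text{Continuity: } A \succsim B \succsim C \implies \exists\, p \in [0,1] \text{ such that } pA + (1-p)C \sim B
  \text{Independence: } A \succ B \implies pA + (1-p)C \succ pB + (1-p)C \text{ for all } C \text{ and } p \in (0,1]

The VNM theorem then guarantees a utility function u with A \succsim B \iff \mathbb{E}[u(A)] \ge \mathbb{E}[u(B)], unique only up to positive affine transformation u' = a \cdot u + b with a > 0. Violating even one axiom (the Allais paradox is the classic failure of independence) blocks the construction for an individual, and the affine non-uniqueness is part of what makes cross-person aggregation underspecified.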

I don't think it's obvious that all beings that matter, matter equally. I don't see anything wrong with valuing my mother much more than I value a randomly selected stranger in Mongolia. It's not just that I do, in fact, value my mother more; I think it's right that I should. My family and friends more than strangers; members of my culture (whatever that means, which isn't necessarily "nation" or "country" or any such thing, though these things may be related) more than members of other cultures... this seems correct to me. (This seems to violate both the "equal consideration" and "agent-neutrality" aspects of classical utilitarianism, to again tie back to the SEP breakdown.)

As far as who matters — to a first approximation, I'd say it's something like "beings intelligent and self-aware enough to consciously think about themselves". Human-level intelligence and subjective consciousness, in other words. I don't think animals matter. I don't think unborn children matter, nor do infants (though there are nonetheless good reasons for not killing them, having to do with bright lines and so forth; similar considerations may protect the severely mentally disabled, though this is a matter which requires much further thought).

Do these thoughts add up to a coherent ethical system? Unlikely. They're what I've got so far, though. Hopefully you find them at least somewhat useful, and of course feel free to ask me to elaborate, if you like.

Comment author: Kaj_Sotala 05 May 2014 03:01:07PM 1 point

even if there's the ability to measure people's satisfaction objectively (so that we can count the transparency problem as solved), that doesn't tell us how to make satisfaction tradeoffs between individuals.

I agree with this. I was originally only objecting to the argument that aggregating utility between individuals would be impossible or incoherent, but I do not have an objection to the argument that the mapping from subjective states to math is underspecified. (Though I don't see this as a serious problem for utilitarianism: it only means that different people will have different mappings rather than there being a single unique one.)
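To make that underspecification concrete, here is a minimal worked example (the numbers are invented purely for illustration). Suppose persons A and B are choosing between outcomes x and y, with VNM utilities

  u_A(x) = 1, \quad u_A(y) = 0; \qquad u_B(x) = 0, \quad u_B(y) = 0.6

Summing gives x a total of 1 and y a total of 0.6, so the aggregate prefers x. But u_B' = 2 u_B represents exactly the same preferences for B, since a VNM utility function is fixed only up to positive affine rescaling; under that equally legitimate mapping, y's total becomes 1.2 and the aggregate prefers y. Nothing in B's preferences singles out one scaling, so the choice of mapping is precisely the extra ingredient that different utilitarians can supply differently.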

Comment author: SaidAchmiz 05 May 2014 06:15:08PM 1 point

I was originally only objecting to the argument that aggregating utility between individuals would be impossible or incoherent

Er, hang on. If this is your objection, I'm not sure that you've actually said what's wrong with said argument. Or do you mean that you were objecting to the applicability of said argument to hedonistic utilitarianism, which is how I read your comments?
