In response to comment by SaidAchmiz on On Caring
Comment author: hyporational 13 October 2014 06:40:57PM 3 points

If you could switch off pain at will would you consider the tissue damage caused by burning yourself irrelevant?

In response to comment by hyporational on On Caring
Comment author: SaidAchmiz 13 October 2014 10:25:54PM 2 points

I would not. This is a fair point.

Follow-up question: are all things that we consider misfortunes similar to the "burn yourself" situation, in that there is some sort of "damage" that is part of what makes the misfortune bad, separately from, and in addition to, the distress/discomfort/pain involved?

In response to comment by VAuroch on On Caring
Comment author: torekp 10 October 2014 02:05:00AM 3 points

Understanding the emotional pain of others, on a non-verbal level, can lead in at least two directions, which I've usually seen called "sympathy" and "personal distress" in the psych literature. Personal distress involves seeing the problem (primarily, or at least importantly) as one's own. Sympathy involves seeing it as that person's. Some people, including Albert Schweitzer, claim(ed) to be able to feel sympathy without significant personal distress, and as far as I can see that seems to be true. Being more like them strikes me as a worthwhile (sub)goal. (Until I get there, if ever - I feel your pain. Sorry, couldn't resist.)

Hey I just realized - if you can master that, and then apply the sympathy-without-personal-distress trick to yourself as well, that looks like it would achieve one of the aims of Buddhism.

In response to comment by torekp on On Caring
Comment author: SaidAchmiz 13 October 2014 04:25:15PM 0 points

apply the sympathy-without-personal-distress trick to yourself

If you do this, would not the result be that you do not feel distress from your own misfortunes? And if you don't feel distress, what, exactly, is there to sympathize with?

Wouldn't you just shrug and dismiss the misfortune as irrelevant?

In response to comment by [deleted] on On Caring
Comment author: MugaSofer 10 October 2014 01:57:39PM 0 points

I think this is the OP's point - there is no (human) mind capable of caring, because human brains aren't capable of modelling numbers that large properly. If you can't contain a mind, you can't use your usual "imaginary person" modules to shift your brain into that "gear".

So - until you find a better way! - you have to sort of act as if your brain was screaming that loudly even when your brain doesn't have a voice that loud.

In response to comment by MugaSofer on On Caring
Comment author: SaidAchmiz 13 October 2014 04:21:48PM 1 point

you have to sort of act as if your brain was screaming that loudly even when your brain doesn't have a voice that loud.

Why should I act this way?

Comment author: Fronken 03 July 2014 02:54:54PM -2 points

Also, what the heck are you talking about?

Wireheading. The term is not a metaphor, and it's not a hypothetical. You can literally stick a wire into someone's pleasure centers and activate them, using only non-groundbreaking neuroscience.

It's been tested on humans, but AFAIK no-one has ever felt compelled to go any further.

(Yeah, seems like it might be evidence. But then, maybe akrasia...)

Comment author: SaidAchmiz 03 July 2014 05:30:05PM 0 points

Where and what are these "pleasure centers", exactly?

Comment author: [deleted] 11 May 2014 04:42:22AM 1 point

Why are you taking the effective altruists' survey?

In response to comment by [deleted] on 2014 Survey of Effective Altruists
Comment author: SaidAchmiz 11 May 2014 06:29:42AM 2 points
Comment author: jkaufman 09 May 2014 05:52:07PM 2 points

Whether or not it's a problem, a survey is not a good place to address it. You have to ask questions people will be able to easily answer if you want to get useful data.

Comment author: SaidAchmiz 10 May 2014 01:11:45AM 1 point

You have to ask questions people will be able to easily answer if you want to get useful data.

That's true, but it is also an inherently problematic approach if (as will almost certainly be the case when it comes to issues of ethics, politics, etc.) the things you really want to know are not easily elicited by questions that people can easily answer, and vice versa: the questions that people can easily answer don't actually tell you what you really want to know about those people's views, attitudes, etc.

In any case, what I meant wasn't that "EAs are not well-versed enough in moral philosophy" is a problem for the survey — what I meant was that it's a problem for the EA movement.

Comment author: Fronken 07 May 2014 09:09:31AM 0 points

... can't we rewire brains right now? We just ... don't.

Comment author: SaidAchmiz 07 May 2014 09:20:18AM 2 points

Well, we must not be hedonistic utilitarians then, right? Because if we were, and we could, we would.

Edit: Also, what the heck are you talking about?

Comment author: Kaj_Sotala 07 May 2014 08:03:13AM 1 point

If you take a position which gives ethically correct results only until such time as some (reasonably plausible) scenario comes to pass, then maybe your position isn't ethical in the first place. "This ethical framework gives nonsensical or monstrous results in edge cases [of varying degrees of edge-ness]" is, after all, a common and quite justified criticism of ethical frameworks.

It is a point against the framework, certainly. But so far nobody has developed an ethical framework that would have no problems at all, so at the moment we can only choose the framework that's the least bad.

(Assuming that we wish to choose one in the first place, of course - I do think that there is merit in just accepting that they're all flawed and then not choosing to endorse any single one.)

Comment author: SaidAchmiz 07 May 2014 08:21:55AM 0 points

(Assuming that we wish to choose one in the first place, of course - I do think that there is merit in just accepting that they're all flawed and then not choosing to endorse any single one.)

Well, that's been my policy so far, certainly. Some are worse than others, though. "This ethical framework breaks in catastrophic, horrifying fashion, creating an instant dystopia, as soon as we can rewire people's brains" is pretty darn bad.

Comment author: Kaj_Sotala 07 May 2014 07:38:35AM 0 points

If I intrinsically care about, say, freedom, that's not an ethical claim. It's just a preference. [...]

Hedonistic utilitarianism tells me that my aforementioned preference is incoherent or mistaken, and that in fact I do not have any preferences (or any preferences that are morally important or worth caring about) other than preferences about pleasure/pain.

Ethical subjectivism (which I subscribe to) would say that "ethical claims" are just a specific subset of our preferences; indeed, I'm rather skeptical of the notion that there is a distinction between ethical claims and preferences in the first place. But HU wouldn't necessarily say that someone's preference for something other than pleasure or pain is mistaken - interpreted within a subjectivist framework, HU is just a description of preferences that are different. See my response to blacktrance.

Comment author: SaidAchmiz 07 May 2014 08:18:41AM 1 point

But HU wouldn't necessarily say that someone's preference for something other than pleasure or pain is mistaken - interpreted within a subjectivist framework, HU is just a description of preferences that are different.

I really don't think that this is correct. If this were true, first of all, hedonistic utilitarianism would simply reduce to preference utilitarianism. In actual fact, neither view is merely about one's own terminal values.

If someone, personally, cares only about pain and pleasure, but acknowledges that other people may have other things as terminal values, and thinks that The Good lies in satisfying everyone's preferences maximally — which, for themselves, means maximizing pleasure and minimizing pain, and for other people may mean other things — then that person is not a hedonistic utilitarian. They are a preference utilitarian. Referring to them as an HU is simply not correct, because that's not how the term is used in the philosophical literature.

On the other hand, if someone cares only about pain and pleasure — both their own and other people's — and would prefer that everyone's pleasure be maximized and everyone's pain be minimized; but this person is not a moral realist, and has no opinion on what constitutes The Good or thinks there's no fact of the matter about whether an act is right or wrong; well, then this person is not a utilitarian at all. Again, describing this person as a hedonistic or any other kind of utilitarian completely fails to match up with how the term is used in the philosophical literature.

As for ethical subjectivism — uh, I don't think that's an actual thing. I'd not heard of anything by that name until today. I don't like going by wikipedia's definitions of philosophical principles, so I tried tracking it down to a source, such as perhaps a major philosopher espousing the view or at least describing it coherently. No such luck. Take a look at that list of references on its wikipedia page; two are to a single book (written in 1959 by some guy I've never heard of — have you? — the shortness of whose wikipedia page suggests that he wasn't anyone interesting), and one is to a barely-related page that mentions the thing once, in passing, by a different name. I'm not convinced. As best I can tell, it's a label that some modern-day historians of philosophy have used to describe... a not-quite-consistent family of views. (Divine command theory, for one.)

But let's attempt to take it at face value. You say:

Someone could be an ethical subjectivist and say that utilitarianism is the theory that best describes their particular attitudes, or at least that subset of their attitudes that they endorse.

Very well. Are their attitudes correct, do they think? If they say there's no fact of the matter about that, then they're not a utilitarian. "Utilitarianism" is a quite established term in the literature. You can't just apply it to any old thing.

Of course, this is LessWrong; we don't argue about definitions; we're interested in what people actually think. However, in this case I think getting our terms straight is important, for two reasons:

  1. When most people say they're utilitarians, they mean it in the usual sense, I think. So to understand what's going on in these discussions, and in the heads of the people we're talking to, we need to know what the usual sense is.

  2. If you hold some view which is not one of the usual views with commonly-known terms, you shouldn't call it by one of the commonly-known terms, because then I won't have any idea what you're talking about and we'll keep getting into comment threads like this one.

Comment author: tog 05 May 2014 05:54:26PM 1 point

Ditto for Vaniver and Said.

Comment author: SaidAchmiz 07 May 2014 07:36:50AM 0 points

Out of curiosity, what was your reason for asking about my ethical views in detail? I did somewhat enjoy writing out that comment, but I'm curious as to whether you were planning to go somewhere with this.
