A common response (although one I cannot find an example of via the search feature) I have observed from Less Wrongers to the challenge of interpersonal utility comparison is the claim that "we do it all the time". I take this to mean that when we make decisions we often consider the preferences of our friends and family (and sometimes strangers or enemies), and that whatever is going on in our minds when we do this approximates interpersonal utility calculations (in some objective sense). This, to me, seems like legerdemain for basically this reason:
...One stand restoring to utilitarianism its role of judging policy, is that interpersonal comparisons are obviously possible since we are making them all the time. Only if we denied "other minds" could we rule out comparisons between them. Everyday linguistic usage proves the logical legitimacy of such statements as "A is happier than B" (level-comparison) and, at a pinch, presumably also "A is happier than B but by less than B is happier than C" (difference-comparison). A degree of freedom is, however, left to interpretation, which vitiates this approach. For these everyday statements can,
The other day, I forgot my eyeglasses at home, and while walking I got a good-sized piece of dust or dirt lodged in my eye. My eye was incapacitated for the better part of a minute until tears washed it out. I had a bit of an epiphany: 3^^^3 dust specks suddenly seems a lot scarier, something you obviously need to aggregate and assign a monstrous pile of disutility to. So basically, I have updated my position on torture vs. specks.
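For anyone unfamiliar with the notation, 3^^^3 is Knuth's up-arrow notation. Here is a minimal sketch of the recursive definition (the function name and structure are my own illustration; 3^^^3 itself is far too large to ever actually evaluate):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ↑^n b: n=1 is exponentiation, and each
    extra arrow iterates the previous operation.
    3^^^3 would be up_arrow(3, 3, 3) -- unimaginably large."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # a ↑^n b = a ↑^(n-1) (a ↑^n (b-1))
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
```

Only tiny inputs terminate in practice; even 3^^4 already overwhelms any computer, which is part of why 3^^^3 dust specks is such an unintuitive quantity.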
I have a pill that will make you a psychopath. You will retain all your intellectual abilities and all understanding of moral theory, but your emotional reactions to others' suffering will cease. You will still have the empathy to understand that others are suffering, but you won't feel automatic sympathy for it.
Do you want to take it?
I am having a discussion on reddit (I am TheMeiguoren), and I have a moral quandary that I want to run by the community.
I'll highlight the main point (the context is a discussion about immortality):
...imbecile: For someone to have several lifetimes to be considered a good thing, it must be conclusively shown that this person improves the life of others more and faster than several other people could achieve in their lifetime together with the resources he has at his disposal.
me: If my existence really was harming the human race by not being as efficient a
I have a question: what is akrasia exactly?
Say I have to finish a paper, but I also enjoy wasting time on the internet. All things considered, I decide it would be better for me to finish the paper than for me to waste time on the internet. And yet I waste time on the internet. What's going on there? It can't just be a reflex or a tic: my reflexes aren't that sophisticated. Given how complicated wasting time on the internet is, and that I decidedly enjoy it, it looks like an intentional action, something which is the result of my reasoning. Yet I reasoned...
Summary: I'm wondering whether anyone (especially moral anti-realists) would disagree with the statement, "The utility of an agent can only depend on the mental state of that agent".
I have had little success in my attempts to devise a coherent moral realist theory of meta-ethics, and am no longer very sure that moral realism is true, but there is one statement about morality that seems clearly true to me: "The utility of an agent can only depend on the mental state of that agent". Call this statement S. By utility I roughly mean how goo...
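One way to state S symbolically (this notation is mine, not the original poster's): writing $M_i$ for the complete mental state of agent $i$, S says that $i$'s utility factors through that mental state alone,

```latex
U_i = f_i(M_i),
```

so that no feature of the external world affects $U_i$ except via its effect on $M_i$; in particular, two copies of the agent in identical mental states would have identical utility regardless of what is actually true outside their heads.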
Love - an increase in her utility causes an increase in your utility.
Hate - an increase in her utility causes a decrease in your utility.
Indifference - a change in her utility has no influence on your utility.
Love = good.
Hate = evil.
Indifference = how almost everyone feels towards almost everyone.
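Read literally, these three definitions can be sketched as coupled utility functions (the notation is my own gloss, assuming your utility $U_{\text{you}}$ is differentiable in hers, $U_{\text{her}}$):

```latex
\frac{\partial U_{\text{you}}}{\partial U_{\text{her}}} > 0 \quad \text{(love)}, \qquad
\frac{\partial U_{\text{you}}}{\partial U_{\text{her}}} < 0 \quad \text{(hate)}, \qquad
\frac{\partial U_{\text{you}}}{\partial U_{\text{her}}} = 0 \quad \text{(indifference)}
```

On this reading, "almost everyone feels indifference towards almost everyone" just says the cross-derivative is zero for nearly all pairs of people.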
Question: What is the definition of morality? What is morality? For what do humans use this concept, and what motivates humans to better understand morality, whatever it is?
[...]and related-to-rationality enough to deserve its own thread.
I've gotten to thinking that morality and rationality are very, very isomorphic. The former seems to require the latter, and in my experience the latter gives rise to the former. So they may not even be completely distinguishable. We've got lots of commonalities between the two: both are very difficult for humans due to our haphazard makeup, and both have imaginary Ideal versions (respectively: God, and the agent who only has true beliefs and optimal decisions and infinite computing power), and they seem to be correlated (though it is hard to say for sure), and the folk versions of both are always wrong. By which I mean that when someone has an axe to grind, he will say it is moral to X, or rational to X, where really X is just what he wants, whether he is in a position of power or not. Related to that, I've got a pet theory that if you take the high values of each literally, they are entirely uncontroversial, and arguments and tribalism only begin when people start making claims about what each implies, but once again I can't be sure at this juncture.
What say ye, Less Wrong?
Related to that, I've got a pet theory that if you take the high values of each literally, they are entirely uncontroversial
My sense is that this assertion can be empirically falsified for all levels of abstraction below "Do what is right."
But in a particular society or sub-culture, more specific assertions can be uncontroversial - in an unhelpful-for-solving-any-problems kind of way. That was what I took away from Applause Lights.
I figure morality as a topic is popular enough and important enough and related-to-rationality enough to deserve its own thread.
Questions, comments, rants, links, whatever are all welcome. If you're like me, you've probably been aching to share your ten-paragraph take on meta-ethics or whatever for about three uncountable eons now. Here's your chance.
I recommend reading Wikipedia's article on meta-ethics before jumping into the fray, if only to get familiar with the standard terminology. The standard terminology is often abused. This makes some people sad. Please don't make those people sad.