I'm pretty intimidated about posting, since you guys all seem so smart and knowledgeable, but here goes.
So I take the quality of this post along with this statement to indicate LW is not being friendly enough. I think we're currently losing more than we're gaining by being discouraging to newbies and lurkers. One suggestion I have that is probably unrealistic would be integrating the LW chatroom into the main site somehow so that people could get to it via a link from the front page. Chat feels like much less of a commitment than posting to even an open thread.
OP: good post. Don't worry about not being "up to snuff."
Welcome to LessWrong, and thanks for posting!
Regarding the evolution of emotions, consider this:
Imagine a group of life forms of the same species who compete for resources. Let's say either they are fairly evenly matched in power, in which case it is better for them to cooperate and divide resources fairly, avoiding the energy wasted in fighting. Alternatively, some (alphas) are superior in power, and the game-theoretically optimal outcome is for the more dominant to take a larger share of resources while still allowing the others to have some. (T...
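The cooperate-vs-fight reasoning above can be sketched as a toy payoff comparison. All the numbers here are my own illustrative assumptions, not anything from the comment:

```python
# Toy payoff comparison for the cooperate-vs-fight scenario.
# RESOURCE and FIGHT_COST are assumed, illustrative values.

RESOURCE = 10.0   # total value of the contested resource
FIGHT_COST = 6.0  # energy each side wastes in a fight

def expected_payoff_fight_even():
    # Two evenly matched individuals: each wins half the time,
    # but both pay the cost of fighting.
    return 0.5 * RESOURCE - FIGHT_COST

def payoff_share_evenly():
    # Cooperate and split the resource without fighting.
    return RESOURCE / 2

# For evenly matched individuals, sharing beats fighting
# whenever fighting is costly:
assert payoff_share_evenly() > expected_payoff_fight_even()

# An "alpha" who wins with high probability can still do better
# taking a larger share peacefully than risking a costly fight:
p_alpha_wins = 0.9
alpha_fight = p_alpha_wins * RESOURCE - FIGHT_COST  # 0.9*10 - 6 = 3
alpha_unequal_share = 0.7 * RESOURCE                # peaceful 70% share = 7
assert alpha_unequal_share > alpha_fight
```

Under these assumptions, both the "split fairly" and the "alpha takes more but shares" outcomes dominate fighting, which is the shape of the argument the comment gestures at.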
I think you are over-optimistic about human goodness. If you had to deconvert at all, it is possible you are from a culture where Christian morals still run strong amongst atheists. (Comparison: I do not remember any family members with a personal belief; I do remember some great-grandmas who went to church because it was a social expectation, but I think they never really felt entitled to form a personal belief or deny one. It was more obedience in observance than faith.) These kinds of habits don't die too quickly; in fact, they can take centuries - ther...
Good post. I think you are thinking about morality correctly, and I share your feelings about the sentiments behind virtue ethics and consequentialism not being particularly dissimilar or totally incompatible.
...Um... so my high school Calculus teacher, who is lots, lots smarter than I am, thinks "emotion" is evidence of intelligent design. I thought this was just a "god of the gaps" thing, but maybe it is really the simplest explanation. I think most people here have ruled out an omnipotent, omni-benevolent God... but maybe life on eart
We want to do things that make ourselves happy, and we want to do things that make others happy.
One way to test whether we all want to do things that make others happy is to read a book or two. Try "Human Smoke" by Nicholson Baker, for instance. Another test would be to spend part of a day in prison or a mental hospital. But the most direct means I found to disabuse myself of the idea that we all want to do things that make others happy was to meet more people. Having met more people, I am now more appreciative of the not-all people who not-all o...
Part 2
One ultimate psychological motivation can trump even goodness, and that's the second terminal virtue: personal happiness.
If goodness were a terminal virtue, then how could it ever be trumped by anything? Actually, I think there's an answer to this. To me, being a terminal virtue seems to mean that you value it regardless of whether it leads to anything else. Contrast this with "I value X only to the extent that it leads to Y". But if you have more than one terminal virtue, it seems to follow that you'd have to choose which one you value...
Part 1
This is probably the first "philosophical" thought I've had in my life
Haha, good one. Humor is often a good way to open :)
happy
I assume you mean "desirability of mind-state". People associate the word "happy" with a lot of different things, so I think it's worth giving some sort of operational definition (could probably be informal though).
So I suspect a certain commonality among human beings in that we all actually share the same terminal values, or terminal virtues.
I think a quick primer on consequentialism...
Why did the alien-god give us emotions?
The alien-god does not act rationally.... The origin of emotion ultimately seems like the result of random chance.
Emotions are likely as useless as other things the alien-god gave us, things like eyes and livers and kidneys and sex-drives and fight-or-flight responses.
Emotions appear to drive social cooperation, at least among mammals. Human partnership with dogs is mediated and cemented by emotions; I think only someone who has spent no time with dogs could disagree with this observation. Emotions and th...
Hey els, thanks for posting your thoughts. It'd be nice if you put a summary in the first paragraph, seeing as the article is so long.
Welcome to LessWrong! I wouldn't comment if I didn't like your post and think it was worth responding to, so please don't interpret my disagreement as negative feedback. I appreciate your post, and it got me thinking. That said, I disagree with you.
The real point I'm making here is that however we categorize personal happiness, goodness belongs in the same category, because in practice, all other goals seem to stem from one or both of these concepts.
Your claims are probably much closer to true for some people than they are for me, but they are far fro...
I haven't read much about ethics on LessWrong or in general, but I recall reading, in passing, things to the effect of, "Consequentialism (more specifically, utilitarianism) and deontology are not necessarily incompatible." I always assumed that they were reconciled by taking utility maximization as the ideal case and deontology as the tractable approximation. "There may be situations in which killing others results in greater utility than not killing others, and those situations may even be more common than I think, but given imperfect info...
Um... so my high school Calculus teacher, who is lots, lots smarter than I am, thinks "emotion" is evidence of intelligent design.
I have gone in a similar direction at times, finding my own consciousness to be evidence of a general place for consciousness in the universe. That is, if the machine that is me is conscious of itself in this universe, then EITHER there is a magic sky-father who pastes consciousness onto some kinds of machines but not others, at his whim OR there is a physics-of-consciousness that we observe produces consciousness ...
Hi, I'm new to LessWrong. I stumbled onto this site a month ago, and ever since, I've been devouring Rationality: AI to Zombies faster than I used to go through my favorite fantasy novels. I've spent some time on the website too, and I'm pretty intimidated about posting, since you guys all seem so smart and knowledgeable, but here goes... This is probably the first intellectual idea I've had in my life, so if you want to tear it to shreds, you are more than welcome to, but please be gentle with my feelings. :)
Edit: Thanks to many helpful comments, I've cleaned up the original post quite a bit and changed the title to reflect this.
Ends-in-themselves
As humans, we seem to share the same terminal values, or terminal virtues. We want to do things that make ourselves happy, and we want to do things that make others happy. We want to 'become happy' and 'become good.'
Because various determinants, including personal fulfillment, can affect an individual's happiness, there is significant overlap between these ultimate motivators. Doing good for others usually brings us happiness; donating to charity, for example, makes people feel warm and fuzzy. Some might recognize this overlap and conclude that all humans are entirely selfish, that even those who appear altruistic are subconsciously acting purely out of self-interest. Yet many of us choose to donate to charities that we believe do the most good per dollar, rather than handing out money through personal-happiness-optimizing random acts of kindness. Seemingly rational human beings sometimes make conscious decisions not to maximize their personal happiness efficiently, for the sake of others. Consider Eliezer's example in Terminal Values and Instrumental Values of a mother who sacrifices her life for her son.
Why would people do stuff that they know won't efficiently increase their happiness? Before I de-converted from Christianity and started to learn what evolution and natural selection actually were, before I realized that altruistic tendencies are partially genetic, it used to utterly mystify me that atheists would sometimes act so virtuously. I did believe that God gave them a conscience, but I kinda thought that surely someone rational enough to become an atheist would be rational enough to realize that his conscience didn't always lead him to his optimal mind-state, and work to overcome it. Personally, I used to joke with my friends that Christianity was the only thing stopping me from pursuing my true dream job of becoming a thief (strategy + challenge + adrenaline + variety = what more could I ask for?). Then, when I de-converted, it hit me: Hey, you know, Ellen, you really *could* become a thief now! What fun you could have! I flinched from the thought. Why didn't I want to overcome my conscience, become a thief, and live a fun-filled life? Well, this isn't as baffling to me now, simply because I've changed where I draw the boundary. I've come to classify goodness as an end-in-itself, just like I'd always done with happiness.
Becoming good
I first read about virtue ethics in On Terminal Goals and Virtue Ethics. As I read, I couldn't help but want to be a virtue ethicist and a consequentialist. Most virtues just seemed like instrumental values.
The post's author mentioned Divergent protagonist Tris as an example of virtue ethics:
I suspect that goodness is, perhaps subconsciously, a terminal virtue for the vast majority of virtue ethicists. I appreciate Oscar Wilde's writing in De Profundis:
Wilde's thoughts on humility translate quite nicely to an innate desire for goodness.
When presented with a conflict between an elected virtue, such as loyalty or truth, and the underlying desire to be good, most virtue ethicists would likely abandon the elected virtue. With truth, consider the classic example of lying to Nazis to save Jews. Generally speaking, it is wrong to conceal the truth, but in special cases, most people would agree that lying is actually less wrong than truth-telling. I'm not certain, but my hunch is that most professing virtue ethicists would find that in extreme thought experiments, their terminal virtue of goodness would eventually trump their other virtues, too.
Becoming happy
However, there's one exception. One desire can sometimes trump even the desire for goodness, and that's the desire for personal happiness.
We usually want what makes us happy. I want what makes me happy. Spending time with family makes me happy. Playing board games makes me happy. Going hiking makes me happy. Winning races makes me happy. Being open-minded makes me happy. Hearing praise makes me happy. Learning new things makes me happy. Thinking strategically makes me happy. Playing touch football with friends makes me happy. Sharing ideas makes me happy. Independence makes me happy. Adventure makes me happy. Even divulging personal information makes me happy.
Fun, accomplishment, positive self-image, sense of security, and others' approval: all of these are examples of happiness contributors, or things that lead me to my own, personal optimal mind-state. Every time I engage in one of the happiness increasers above, I'm fulfilling an instrumental value. I'm doing the same thing when I reject activities I dislike or work to reverse personality traits that I think decrease my overall happiness.
Tris was, in other words, pursuing happiness by trying to change an aspect of her personality she disliked.
Guessing at subconscious motivation
By now, you might be wondering, "But what about the virtue ethicist who is religious? Wouldn't she be ultimately motivated by something other than happiness and goodness?"
Well, in the case of Christianity, most people probably just want to 'become Christ-like' which, for them, overlaps quite conveniently with personal satisfaction and helping others. Happiness and goodness might be intuitively driving them to choose this instrumental goal, and for them, conflict between the two never seems to arise.
Let's consider 'become obedient to God's will' from a modern-day Christian perspective. 1 Timothy 2:4 says, "[God our Savior] wants all men to be saved and to come to a knowledge of the truth." Mark 12:31 says, "Love your neighbor as yourself." Well, I love myself enough that I want to do everything in my power to avoid eternal punishment; therefore, I should love my neighbor enough to do everything in my power to stop him from going to hell, too.
So anytime a Christian does anything but pray for others, do faith-strengthening activities, spread the gospel, or earn money to donate to missionaries, he is acting as if God and hell don't exist. As a Christian, I totally realized this, and often tried to convince myself and others that we were acting wrongly by not being more devout. I couldn't shake the notion that spending time having fun instead of praying or sharing the gospel was somehow wrong, because it went against God's will that all men be saved, and I believed God's will, by definition, was right. (Oops.) But I still acted in accordance with my personal happiness on many occasions. I said God's will was the only end-in-itself, but I didn't act like it. I didn't feel like it. The innate desire to pursue personal happiness is an extremely strong motivating force, so strong that Christians really don't like to label it as sin. Imagine how many deconversions we would see if it were suddenly sinful to play football, watch movies with your family, or splurge on tasty restaurant meals. Yet the Bible often mentions giving up material wealth entirely, and in Luke 9:23 Jesus says, "Whoever wants to be my disciple must deny themselves and take up their cross daily and follow me."
Let's further consider those who believe God's will is good by definition. Such Christians tend to believe "God wants what's best for us, even when we don't understand it." Unless they have exceptionally strong tendencies to analyze opportunity costs, their understanding of God's will and their intuitive idea of what's best for humanity rarely conflict. But let's imagine they do. Let's say someone strongly believes in God and is led to believe that God wants him to sacrifice his child. This action would certainly go against his terminal value of goodness and may cause cognitive dissonance. But he could still do it, subconsciously satisfying his (latent) terminal value of personal happiness. What on earth does personal happiness have to do with sacrificing a child? Well, the believer takes comfort in his belief in God and his hope of heaven (the child gets a shortcut there). He takes comfort in his religious community. To not sacrifice the child would be to deny God and lose that immense source of comfort.
These thoughts obviously don't happen on a conscious level, but maybe people have personal-happiness-optimizing intuitions. Of course, I have near-zero scientific knowledge, no clue what really goes on in the subconscious, and I'm just guessing at all this.
Individual variance
Again, happiness has a huge overlap with goodness. Goodness often, but not always, leads to personal happiness. A lot of seemingly random stuff leads to personal happiness, actually. Whatever that stuff is, it largely accounts for the individual variance in which virtues are pursued. It's probably closely tied to the four Keirsey temperaments: security-seeking, sensation-seeking, knowledge-seeking, and identity-seeking types. (Unsurprisingly, most people here at LW reported knowledge-seeking personality types.) I'm a sensation-seeker. An identity-seeker could find his identity in the religious community and in being a 'child of God'. A security-seeker could find security in his belief in heaven. An identity-seeking rationalist might be the type most likely to aspire to 'become completely truthful', even if she somehow knew with complete certainty that telling the truth, in a certain situation, would lead to a bad outcome for humanity.
Perhaps the general tendency among professing virtue ethicists is to pursue happiness and goodness relatively intuitively, while professing consequentialists pursue the same values more analytically.
Also worth noting is the individual variance in someone's "preference ratio" of happiness relative to goodness. Among professing consequentialists, we might find sociopaths and extreme altruists at opposite ends of a happiness-goodness continuum, with most of us falling somewhere in between. To position virtue ethicists on such a continuum would be significantly more difficult, requiring further speculation about subconscious motivation.
Real-life convergence of moral views
I immediately identified with consequentialism when I first read about it. Then I read about virtue ethics, and I immediately identified with that, too. I naturally analyze my actions with my goals in mind. But I also often find myself idolizing a certain trait in others, such as environmental consciousness, and then pursuing that trait on my own. For example:
I've had friends who care a lot about the environment. I think it's cool that they do. So even before hearing about virtue ethics, I wanted to 'become someone who cares about the environment'. Subconsciously, I must have suspected that this would help me achieve my terminal goals of happiness and goodness.
If caring about the environment is my instrumental goal, I can feel good about myself when I instinctively pick up trash, conserve energy, use a reusable water bottle; i.e. do things environmentally conscious people do. It's quick, it's efficient, and having labeled 'caring about the environment' as a personal virtue, I'm spared from analyzing every last decision. Being environmentally conscious is a valuable habit.
Yet I can still do opportunity cost analyses with my chosen virtue. For example, I could stop showering to help conserve California's water. Or, I could apparently have the same effect by eating six fewer hamburgers in a year. More goodness would result if I stopped eating meat and limited my showering, but doing so would interfere with my personal happiness. I naturally seek to balance my terminal goals of goodness and happiness. Personally, I prefer showering to eating hamburgers, so I cut significantly back on my meat consumption without worrying too much about my showering habits. This practical convergence of virtue ethics and consequentialism satisfies my desires for happiness and goodness harmoniously.
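As a back-of-envelope check of the shower-vs-hamburger comparison, here's a sketch using rough water-footprint figures that I'm assuming for illustration (an average shower of ~17 gallons and ~660 gallons of embedded water per hamburger are commonly cited ballpark estimates, not numbers from the post):

```python
# Rough check: how many hamburgers' worth of water is a year of showers?
# Both footprint figures below are assumed ballpark estimates.

GALLONS_PER_SHOWER = 17.0    # assumed average shower
SHOWERS_PER_YEAR = 365       # one shower per day
GALLONS_PER_BURGER = 660.0   # assumed water footprint of one burger

yearly_shower_water = GALLONS_PER_SHOWER * SHOWERS_PER_YEAR
burgers_equivalent = yearly_shower_water / GALLONS_PER_BURGER

# Under these assumptions, a year of daily showers is on the order
# of ten burgers' worth of water.
print(round(burgers_equivalent, 1))
```

The exact ratio depends heavily on the assumed figures, but it lands in the same ballpark as the "six fewer hamburgers" comparison: giving up a handful of burgers trades off against a whole year of showering.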
To summarize:
Personal happiness refers to an individual's optimal mind-state. Pleasure, pain, and personal satisfaction are examples of happiness level determinants. Goodness refers to promoting happiness in others.
Terminal values are ends-in-themselves. The only true terminal values, or virtues, seem to be happiness and goodness. Think of them as psychological motivators, consciously or subconsciously driving us to make the decisions we do. (Physical motivators, like addiction or inertia, can also affect decisions.)
Preferences are what we tend to choose. These can be based on psychological or physical motivators.
Instrumental values are the sub-goals or sub-virtues that we (consciously or subconsciously) believe will best fulfill our terminal values of happiness and goodness. We seem to choose them arbitrarily.
Of course, we're not always aware of what actually leads to optimal mind-states in ourselves and others. Yet as we rationally pursue our goals, we may sometimes intuit like virtue ethicists and other times analyze like consequentialists. Both moral views are useful.
Practical value
So does this idea have any potential practical value?
It took some friendly prodding, but I was finally brought to realize that my purpose in writing this article was not to argue the existence of goodness or the theoretical equality of consequentialism and virtue ethics or anything at all. The real point I'm making here is that however we categorize personal happiness, goodness belongs in the same category, because in practice, all other goals seem to stem from one or both of these concepts. Clarity of expression is an instrumental value, so I'm just saying that perhaps we should consider redrawing our boundaries a bit:
P.S. If anyone is interested in reading a really, really long conversation I had with adamzerner, you can trace the development of this idea there. Language issues were overcome, biases were admitted, new facts were learned, minds were changed, and discussion bounced from ambition, to serial killers, to arrogance, to religion, to the subconscious, to agenthood, to skepticism about the happiness set-point theory, all interconnected somehow. In short, it was the first time I've had a conversation with a fellow "rationalist," and it was one of the coolest experiences I've ever had.