Hi, I'm new to LessWrong. I stumbled onto this site a month ago, and ever since, I've been devouring Rationality: AI to Zombies faster than I used to go through my favorite fantasy novels. I've spent some time on the website too, and I'm pretty intimidated about posting, since you guys all seem so smart and knowledgeable, but here goes... This is probably the first intellectual idea I've had in my life, so if you want to tear it to shreds, you are more than welcome to, but please be gentle with my feelings. :)
Edit: Thanks to many helpful comments, I've cleaned up the original post quite a bit and changed the title to reflect this. 

Ends-in-themselves

As humans, we seem to share the same terminal values, or terminal virtues. We want to do things that make ourselves happy, and we want to do things that make others happy. We want to 'become happy' and 'become good.' 

Because various determinants--including, for instance, personal fulfillment--can affect an individual's happiness, there is significant overlap between these ultimate motivators. Doing good for others usually brings us happiness. For example, donating to charity makes people feel warm and fuzzy. Some might recognize this overlap and conclude that all humans are entirely selfish, that even those who appear altruistic are subconsciously acting purely out of self-interest. Yet many of us choose to donate to charities that we believe do the most good per dollar, rather than handing out money through personal-happiness-optimizing random acts of kindness. Seemingly rational human beings sometimes make conscious decisions that fall short of maximizing their personal happiness, for the sake of others. Consider Eliezer's example in Terminal Values and Instrumental Values of a mother who sacrifices her life for her son.

Why would people do stuff that they know won't efficiently increase their happiness? Before I de-converted from Christianity and started to learn what evolution and natural selection actually were, before I realized that altruistic tendencies are partially genetic, it used to utterly mystify me that atheists would sometimes act so virtuously. I did believe that God gave them a conscience, but I kinda thought that surely someone rational enough to become an atheist would be rational enough to realize that his conscience didn't always lead him to his optimal mind-state, and would work to overcome it. Personally, I used to joke with my friends that Christianity was the only thing stopping me from pursuing my true dream job of becoming a thief (strategy + challenge + adrenaline + variety = what more could I ask for?). Then, when I de-converted, it hit me: Hey, you know, Ellen, you really *could* become a thief now! What fun you could have! I flinched from the thought. Why didn't I want to overcome my conscience, become a thief, and live a fun-filled life? Well, this isn't as baffling to me now, simply because I've changed where I draw the boundary. I've come to classify goodness as an end-in-itself, just like I'd always done with happiness.

Becoming good

I first read about virtue ethics in On Terminal Goals and Virtue Ethics. As I read, I couldn't help but want to be a virtue ethicist and a consequentialist. Most virtues just seemed like instrumental values.

The post's author mentioned Divergent protagonist Tris as an example of virtue ethics:

Bravery was a virtue that she thought she ought to have. If the graph of her motivations even went any deeper, the only node beyond ‘become brave’ was ‘become good.’

I suspect that goodness is, perhaps subconsciously, a terminal virtue for the vast majority of virtue ethicists. I appreciate Oscar Wilde's writing in De Profundis:

Now I find hidden somewhere away in my nature something that tells me that nothing in the whole world is meaningless, and suffering least of all.

It is the last thing left in me, and the best: the ultimate discovery at which I have arrived, the starting-point for a fresh development. It has come to me right out of myself, so I know that it has come at the proper time. It could not have come before, nor later. Had anyone told me of it, I would have rejected it. Had it been brought to me, I would have refused it. As I found it, I want to keep it. I must do so...

Of all things it is the strangest.

Wilde's thoughts on humility translate quite nicely to an innate desire for goodness.

When presented with a conflict between an elected virtue, such as loyalty or truth, and the underlying desire to be good, most virtue ethicists would likely abandon the elected virtue. With truth, consider the classic example of lying to Nazis to save Jews. Generally speaking, it is wrong to conceal the truth, but in special cases, most people would agree that lying is actually less wrong than truth-telling. I'm not certain, but my hunch is that most professing virtue ethicists would find that in extreme thought experiments, their terminal virtue of goodness would eventually trump their other virtues, too.

Becoming happy

However, there's one exception. One desire can sometimes trump even the desire for goodness, and that's the desire for personal happiness. 

We usually want what makes us happy. I want what makes me happy. Spending time with family makes me happy. Playing board games makes me happy. Going hiking makes me happy. Winning races makes me happy. Being open-minded makes me happy. Hearing praise makes me happy. Learning new things makes me happy. Thinking strategically makes me happy. Playing touch football with friends makes me happy. Sharing ideas makes me happy. Independence makes me happy. Adventure makes me happy. Even divulging personal information makes me happy.

Fun, accomplishment, positive self-image, sense of security, and others' approval: all of these are examples of happiness contributors, or things that lead me to my own, personal optimal mind-state. Every time I engage in one of the happiness increasers above, I'm fulfilling an instrumental value. I'm doing the same thing when I reject activities I dislike or work to reverse personality traits that I think decrease my overall happiness.

Tris didn’t join the Dauntless cast because she thought they were doing the most good in society, or because she thought her comparative advantage to do good lay there–she chose it because they were brave, and she wasn’t, yet, and she wanted to be.

Tris was, in other words, pursuing happiness by trying to change an aspect of her personality she disliked.

Guessing at subconscious motivation

By now, you might be wondering, "But what about the virtue ethicist who is religious? Wouldn't she be ultimately motivated by something other than happiness and goodness?" 

Well, in the case of Christianity, most people probably just want to 'become Christ-like' which, for them, overlaps quite conveniently with personal satisfaction and helping others. Happiness and goodness might be intuitively driving them to choose this instrumental goal, and for them, conflict between the two never seems to arise. 

Let's consider 'become obedient to God's will' from a modern-day Christian perspective. 1 Timothy 2:4 says, "[God our Savior] wants all men to be saved and to come to a knowledge of the truth." Mark 12:31 says, "Love your neighbor as yourself." Well, I love myself enough that I want to do everything in my power to avoid eternal punishment; therefore, I should love my neighbor enough to do everything in my power to stop him from going to hell, too.

So anytime a Christian does anything but pray for others, do faith-strengthening activities, spread the gospel, or earn money to donate to missionaries, he is anticipating as if God/hell doesn't exist. As a Christian, I totally realized this, and often tried to convince myself and others that we were acting wrongly by not being more devout. I couldn't shake the notion that spending time having fun instead of praying or sharing the gospel was somehow wrong because it went against God's will that all men be saved, and I believed God's will, by definition, was right. (Oops.) But I still acted in accordance with my personal happiness on many occasions. I said God's will was the only end-in-itself, but I didn't act like it. I didn't feel like it. The innate desire to pursue personal happiness is an extremely strong motivating force, so strong that Christians really don't like to label it as sin. Imagine how many deconversions we would see if it were suddenly sinful to play football, watch movies with your family, or splurge on tasty restaurant meals. Yet the Bible often mentions giving up material wealth entirely, and in Luke 9:23 Jesus says, "Whoever wants to be my disciple must deny themselves and take up their cross daily and follow me."

Let's further consider those who believe God's will is good, by definition. Such Christians tend to believe "God wants what's best for us, even when we don't understand it." Unless they have exceptionally strong tendencies to analyze opportunity costs, their understanding of God's will and their intuitive idea of what's best for humanity rarely conflict. But let's imagine it does. Let's say someone strongly believes in God, and is led to believe that God wants him to sacrifice his child. This action would certainly go against his terminal value of goodness and may cause cognitive dissonance. But he could still do it, subconsciously satisfying his (latent) terminal value of personal happiness. What on earth does personal happiness have to do with sacrificing a child? Well, the believer takes comfort in his belief in God and his hope of heaven (the child gets a shortcut there). He takes comfort in his religious community. To not sacrifice the child would be to deny God and lose that immense source of comfort.

These thoughts obviously don't happen on a conscious level, but maybe people have personal-happiness-optimizing intuitions. Of course, I have near-zero scientific knowledge, no clue what really goes on in the subconscious, and I'm just guessing at all this.

Individual variance

Again, happiness has a huge overlap with goodness. Goodness often, but not always, leads to personal happiness. A lot of seemingly random stuff leads to personal happiness, actually. Whatever that stuff is, it largely accounts for the individual variance in which virtues are pursued. It's probably closely tied to the four Keirsey Temperaments of security-seeking, sensation-seeking, knowledge-seeking, and identity-seeking types. (Unsurprisingly, most people here at LW reported knowledge-seeking personality types.) I'm a sensation-seeker. An identity-seeker could find his identity in the religious community and in being a 'child of God'. A security-seeker could find security in his belief in heaven. An identity-seeking rationalist might be the type most likely to aspire to 'become completely truthful' even if she somehow knew with complete certainty that telling the truth, in a certain situation, would lead to a bad outcome for humanity.

Perhaps the general tendency among professing virtue ethicists is to pursue happiness and goodness relatively intuitively, while professing consequentialists pursue the same values more analytically.

Also worth noting is the individual variance in someone's "preference ratio" of happiness relative to goodness. Among professing consequentialists, we might find sociopaths and extreme altruists at opposite ends of a happiness-goodness continuum, with most of us falling somewhere in between. To position virtue ethicists on such a continuum would be significantly more difficult, requiring further speculation about subconscious motivation.

Real-life convergence of moral views

I immediately identified with consequentialism when I first read about it. Then I read about virtue ethics, and I immediately identified with that, too. I naturally analyze my actions with my goals in mind. But I also often find myself idolizing a certain trait in others, such as environmental consciousness, and then pursuing that trait on my own. For example:

I've had friends who care a lot about the environment. I think it's cool that they do. So even before hearing about virtue ethics, I wanted to 'become someone who cares about the environment'. Subconsciously, I must have suspected that this would help me achieve my terminal goals of happiness and goodness.

If caring about the environment is my instrumental goal, I can feel good about myself when I instinctively pick up trash, conserve energy, use a reusable water bottle; i.e. do things environmentally conscious people do. It's quick, it's efficient, and having labeled 'caring about the environment' as a personal virtue, I'm spared from analyzing every last decision. Being environmentally conscious is a valuable habit.

Yet I can still do opportunity cost analyses with my chosen virtue. For example, I could stop showering to help conserve California's water. Or, I could apparently have the same effect by eating six fewer hamburgers in a year. More goodness would result if I stopped eating meat and limited my showering, but doing so would interfere with my personal happiness. I naturally seek to balance my terminal goals of goodness and happiness. Personally, I prefer showering to eating hamburgers, so I cut significantly back on my meat consumption without worrying too much about my showering habits. This practical convergence of virtue ethics and consequentialism satisfies my desires for happiness and goodness harmoniously.
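Here's a rough Fermi sketch of that comparison. The gallon figures below are my own assumptions (roughly the numbers that circulated during the 2015 California drought), not anything authoritative:

```python
# Fermi estimate: showers vs. hamburgers as water-saving levers.
# Both gallon figures are rough assumptions, not measured data.
GALLONS_PER_SHOWER = 17    # assumed: one ~8-minute shower, standard head
GALLONS_PER_BURGER = 660   # assumed: oft-cited virtual-water cost of beef

showers_per_burger = GALLONS_PER_BURGER / GALLONS_PER_SHOWER
print(f"one skipped burger ~= {showers_per_burger:.0f} skipped showers")
print(f"six fewer burgers/year ~= {6 * showers_per_burger:.0f} showers,")
print(f"i.e. a daily shower for ~{6 * showers_per_burger / 30:.0f} months")
```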


To summarize:

Personal happiness refers to an individual's optimal mind-state. Pleasure, pain, and personal satisfaction are examples of happiness level determinants. Goodness refers to promoting happiness in others.

Terminal values are ends-in-themselves. The only true terminal values, or virtues, seem to be happiness and goodness. Think of them as psychological motivators, consciously or subconsciously driving us to make the decisions we do. (Physical motivators, like addiction or inertia, can also affect decisions.)

Preferences are what we tend to choose. These can be based on psychological or physical motivators.

Instrumental values are the sub-goals or sub-virtues that we (consciously or subconsciously) believe will best fulfill our terminal values of happiness and goodness. We seem to choose them arbitrarily.

Of course, we're not always aware of what actually leads to optimal mind-states in ourselves and others. Yet as we rationally pursue our goals, we may sometimes intuit like virtue ethicists and other times analyze like consequentialists. Both moral views are useful.

Practical value

So does this idea have any potential practical value? 

It took some friendly prodding, but I was finally brought to realize that my purpose in writing this article was not to argue the existence of goodness or the theoretical equality of consequentialism and virtue ethics or anything at all. The real point I'm making here is that however we categorize personal happiness, goodness belongs in the same category, because in practice, all other goals seem to stem from one or both of these concepts. Clarity of expression is an instrumental value, so I'm just saying that perhaps we should consider redrawing our boundaries a bit:

Figuring where to cut reality in order to carve along the joints—this is the problem worthy of a rationalist.  It is what people should be trying to do, when they set out in search of the floating essence of a word.

P.S. If anyone is interested in reading a really, really long conversation I had with adamzerner, you can trace the development of this idea. Language issues were overcome, biases were admitted, new facts were learned, minds were changed, and discussion bounced from ambition, to serial killers, to arrogance, to religion, to the subconscious, to agenthood, to skepticism about the happiness set-point theory, all interconnected somehow. In short, it was the first time I've had a conversation with a fellow "rationalist" and it was one of the coolest experiences I've ever had.

Comments

I'm pretty intimidated about posting, since you guys all seem so smart and knowledgeable, but here goes.

So I take the quality of this post along with this statement to indicate LW is not being friendly enough. I think we're currently losing more than we're gaining by being discouraging to newbies and lurkers. One suggestion I have that is probably unrealistic would be integrating the LW chatroom into the main site somehow so that people could get to it via a link from the front page. Chat feels like much less of a commitment than posting to even an open thread.

OP: good post. Don't worry about not being "up to snuff."

[anonymous]:

Thanks :)

I'll just say the intimidation factor for me stems more from my own utter lack of scientific knowledge than any unfriendliness on your guys' part. A visible chat room would definitely be a nice feature though!

A visible chat room would definitely be a nice feature though!

I do not use it, but I found these links in the wiki:

Yes, but that is many clicks and button presses away. Making it a one click process would funnel lots of lurkers there where they could ask questions or talk about posts in a less permanent way.

ABSOLUTELY!

In general, I think the bar for posts in Discussion is way too high. After all, if you want to have a discussion, the only thing you really need is a question.

Edit: Obviously there's value in starting off a discussion with something more than just a question, and that all else equal, you'd prefer to start the discussion with more rather than less. But I still get the impression that the general atmosphere is to hold Discussion posts to way too high a standard.

One suggestion I have that is probably unrealistic would be integrating the LW chatroom

Agreed. I hope to do some work on the site in the next year or so (I'm not a good enough developer yet).

Welcome to LessWrong, and thanks for posting!

Regarding the evolution of emotions, consider this:

Imagine a group of life forms of the same species who compete for resources. Let's say either they are fairly even in power, in which case it is better for them to cooperate and divide resources fairly to avoid wasting energy fighting; or some (alphas) are superior in power, in which case the game-theoretically optimal outcome is for the more dominant to take a larger share of resources while still allowing the others some. (This is better for them than fighting to the death to try to get everything.)

If life forms cannot communicate with each other, then they suffer from a prisoner's dilemma problem. They would like to cooperate in the prisoner's dilemma, but without the ability to signal to each other, they cannot be sure the other will not defect. Thus they end up defecting against each other.

We can thus see that if the life forms evolved methods of signalling to each other, it would improve their chances of survival. Thus, complex life forms develop ways of signalling to each other. We can see many, many examples of this throughout a broad range of life.

The life forms also need to be able to mentally model each other now to predict each other's actions. Thus they develop empathy (the ability to model each other), and emotions, which are related to signalling. If each member of the species has similar emotions, that is, they react in similar ways to a given situation, and they have developed ways of expressing these emotions to each other, then it greatly improves the abilities of the life forms to model each other correctly. For example, if one life form then tries to gain an unfair distribution of resources (defecting in the prisoner's dilemma problem), the other will display the emotional response of anger. They are signalling 'you cannot defect against me in such an unfair way. Because you are attempting to do so, I will fight you'.

Because the emotional responses occur automatically, they act similar to a precommitment. Essentially, the life form having the emotional response has been preprogrammed to have this response to the situation. This is a precommitment to a course of action, which can help the life form to achieve a better game-theoretic result.

(For example, if we are playing 'chicken', driving our cars toward each other on a road, the best strategy for me to win the game is to visibly precommit to not swerving no matter what. Thus, the optimal strategy would be to remove my steering wheel and throw it out of the car in a way that you can see. Since you now know that I CANNOT turn, you must turn to avoid crashing, and I win the game. This shows how a strong precommitment can be an advantage).

If we think of emotions as precommitments in this way, we can see that they can give us an advantage in our prisoner's dilemma problem. The opponent then knows that they cannot defect against us too much, or else we will become angry and fight, even though that gives a worse outcome for us as well; the response is emotional, and thus we will carry it out automatically.
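To make the precommitment point concrete, here is a toy sketch (the payoff numbers are made up for illustration, not taken from anywhere) of how an automatic anger response changes the other player's best response:

```python
# A toy sketch of anger-as-precommitment in a one-shot resource split.
# All payoff numbers are made up for illustration.

# Standard prisoner's dilemma payoffs: (row player, column player).
PD = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def column_best_response(row_move):
    """Column player's payoff-maximizing move against a fixed row move."""
    return max(("cooperate", "defect"), key=lambda m: PD[(row_move, m)][1])

print(column_best_response("cooperate"))  # 'defect': defection dominates

# Now the row player visibly precommits: any defection triggers a fight,
# which costs the defector more than mutual cooperation earns.
FIGHT_PAYOFF = -2

def column_best_response_vs_angry():
    payoffs = {
        "cooperate": PD[("cooperate", "cooperate")][1],  # 3
        "defect": FIGHT_PAYOFF,  # anger fires automatically on defection
    }
    return max(payoffs, key=payoffs.get)

print(column_best_response_vs_angry())  # 'cooperate': defecting no longer pays
```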

We can see that emotions are thus an aid to fitness, and life forms that evolve them will have a genetic advantage.

Now imagine that the problems that the life forms need to signal about become much more complex. Rather than just signalling about food or mates, for example, they need to signal about group dynamics, concepts like loyalty to a tribe, their commitment to care for young, etc.

Under these circumstances, we can see the need to develop a much greater range of emotions to deal with various situations. We need to display more than just anger over an unequal distribution, or fear, or so on. We also need to have emotional responses of love, loyalty, and so on, and be able to demonstrate/signal these to each other.

This is my understanding of how emotions should evolve among complex life forms that are at least remotely close to humans. Perhaps for extremely different life forms, emotions would not evolve. Or perhaps in pretty much all complex forms of life which are communicating with each other, some form of emotions would evolve, I don't know. I don't see the need for a magical sky-father who desired emotions to exist and then guided reality to this point.

Also, I am not sure how the universe-creator would accomplish this. If the degrees of freedom they have are to create the underlying fundamental laws of physics, and the initial conditions of the universe, how would they be able to compute out what sets of laws/initial conditions would lead to the physics which would lead to the chemistry which would lead to the biology which would lead to the life forms which would have these emotions? And why would that be the goal of the universe-creator? In the absence of evidence which would distinguish this hypothesis from others, I don't see why we should privilege this hypothesis to such an extent, when it is pretty clear that the real reason for "believing" in it is actually "I want it to be true". (And also "A strong meme which many people believe in is threatening that I will be harmed if I do not 'believe' that this is true, and promising to reward me if I do 'believe' it is true.")

If anyone can correct my thoughts regarding evolution and emotions, or can point to some studies or scientific theories which either support or refute this post, I would love to read them!

[anonymous]:

Thanks for the welcome, and thanks for sharing your thoughts! I love game theory, and all your connections look good to me.

The life forms also need to be able to mentally model each other now to predict each other's actions. Thus they develop empathy (the ability to model each other), and emotions

This is something I still don't understand very well about evolution. They need it, and therefore they develop it? Is there anything that leads them to develop it, or is this related to the "evolving to extinction" chapter? I should go back and re-read the chapters on evolution. Is this something you can somewhat-briefly summarize, or would understanding require a lot more reading?

They need it, and therefore they develop it?

They need it, therefore if it randomly happens, they will keep the outcome.

Imagine a game where you are given random cards, and you choose which of them to keep and which of them to discard. If you need e.g. cards with high numbers, you can "develop" a high-numbered hand by keeping cards with high numbers and discarding cards with low numbers. Yet you have no control over which cards you receive. For example, if you have bad luck and always get only low numbers, you cannot "develop" a high-numbered hand. But only a few high numbers are enough to complete your goal.

Analogously, species receive random mutations. If the mutation makes the survival and reproduction of the animal more likely, the species will keep this gene. If the mutation makes the survival and reproduction of the animal less likely, the species will discard this gene. -- This is a huge simplification, of course. Also the whole process is probabilistic; you may receive a very lucky mutation and yet die for some completely unrelated reason, which means your species cannot keep that gene. Also, which genes provide advantage depends on the environment, and the environment is changing. Etc.

But the idea at the core is that evolution = random mutation + natural selection. Random mutation gives you new cards; natural selection decides which cards to keep and which ones to discard.

Without mutations, there would be no new cards in the game; each species would evolve to some final form and remain such forever. Without natural selection, all changes would be random, and since most mutations are harmful, the species would go extinct (although this is a contradiction in terms, because if you can die as a result of your genes, then you already have some form of natural selection: selecting for survival of those who do not have the lethal genes).

Sometimes there are many possible solutions for one problem. For example, if you need to pick fruit that is very high on the trees (or more precisely speaking: if there is a fruit very high on the trees that no one is picking yet, so anyone who could do so would get a big advantage), here are things that could help: a longer neck, longer legs, ability to jump, ability to climb trees, ability to fly, maybe even ability to knock down the trees. When you randomly get any card in this set (and it doesn't come with big disadvantages which would make it a net loss), you keep it. Some species went one way, other species went another way. -- A huge simplification again, since you cannot get an ability to fly in one step. You probably only get an ability to climb a little bit, or to jump a little bit. And in the next step, you can get ability to climb a little bit more, or to jump a little bit more, or to somehow stay in the air a little bit longer after you have jumped. Every single step must provide an additional advantage.
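If it helps, the whole analogy compresses into a toy simulation. This sketch is purely illustrative, with arbitrary numbers: random variation deals the new cards, and selection decides which ones to keep:

```python
import random

# Toy version of "random mutation + natural selection" from the card
# analogy above. All numbers are arbitrary illustration, not biology.

POP_SIZE = 20
GENERATIONS = 50

def fitness(trait):
    # Pretend a higher trait value means better reach for high fruit.
    return trait

population = [random.uniform(0.0, 1.0) for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Random mutation: each offspring is a noisy copy of one parent.
    offspring = [p + random.gauss(0, 0.05) for p in population]
    # Natural selection: keep only the fittest individuals overall.
    pool = sorted(population + offspring, key=fitness, reverse=True)
    population = pool[:POP_SIZE]

print(f"mean trait after selection: {sum(population) / POP_SIZE:.2f}")
```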

They need it, therefore if it randomly happens, they will keep the outcome.

Yes this. Of course it is not a given that something that would be a useful adaptation will develop randomly.

Great analogies with the hand of cards.

[anonymous]:

Ditto to Ander's comment - very nice summary and analogy, many thanks :)

This is something I still don't understand very well about evolution. They need it, and therefore they develop it? Is there anything that leads them to develop it, or is this related to the "evolving to extinction" chapter?

I'm not a biologist or anything, but I think I'm competent enough to answer this question.

You'll often see biology framed in teleological terms. That is, you'll often see it framed in terms that seem to indicate that natural selection is purposeful, like a person or God (agent) would be. I'll try to reframe this explanation in non-teleological terms. Animal husbandry/selective breeding/artificial selection is a good way to get an idea of how traits become more frequent in populations, and it just seems less mysterious because it happens on shorter timescales and humans set the selection criteria.

Imagine you have some wolves. You want to breed them such that eventually you'll have a generation of wolves that will do your bidding. Some of the wolves are nicer than others, and, rightly, wolves like these are the ones that you believe will be fit to do your bidding. You notice that nice wolves are more likely to have nice parents, and that nice wolves are more likely to give birth to nice pups. So, you prevent the mean wolves from reproducing by some means, and allow the nicest wolves to mate. You re-apply your selection criterion of Niceness to the next generation, allowing only the nicest wolves to mate. Before long, the only wolves you have around are nice wolves that you have since decided to call dogs.

In artificial selection, the selection criterion is Whatever Humans Want. In natural selection, the selection criterion is reproductive fitness; the environment 'decides' (see how easy it is to fall into teleology?) what reproduces. Non-teleologically, organisms with more adaptive traits are more likely to reproduce than organisms with less adaptive traits, and therefore the frequency of those traits in the population increases with time. Rather than thinking of natural selection as 'a thing that magically develops stuff,' imagine it as a process that selects the most adaptive traits among all possible traits. So, we're not so much making traits out of thin air as we are picking the most adaptive traits out of a great many possibilities. You didn't magically imbue the wolves with niceness; niceness was a possible trait among all possible traits, and you narrowed down the possibilities by only letting the nice wolves mate, until at one point you only had nice wolves left.

Like the things that we discussed earlier in the welcome thread, teleological explanations of biology are artifacts of natural language and human psychology. Well before humans spoke of biology, we spoke of agents, and this is often reflected in our language use. As a result, it's also often much more concise to speak in teleological terms. Compare, "Life forms need to be able to mentally model each other, and thus develop modeling software," with my explanations above. Teleological explanations are also often a product of the aforementioned mental-modeling software itself, just as we have historically anthropomorphized natural phenomena as deities. Very importantly, accurate biological explanations that are framed teleologically can be reframed in non-teleological terms, as opposed to explanations that are fundamentally teleological.

Please feel free to ask questions if I didn't explain myself well. And to others, please correct me if I've made an error.

[anonymous]:

No, you explained that really well!! Everything is a lot less fuzzy now! Thank you :) I think with science, the first time I read something, it makes sense to me, but I had such a bad habit of filing scientific facts into the "things you learn in school but don't really need to remember for real life" category of my brain that now, even when I actually care about learning new information, it still takes multiple explanations, and sometimes one really good one like yours, before it really starts to sink in for me.

[anonymous]:

I think you are over-optimistic about human goodness. If you had to deconvert at all, it is possible you are from a culture where Christian morals still run strong amongst atheists. (Comparison: I do not remember any family members with a personal belief, I do remember some great-grandmas who went to church because it was a social expectation, but I think they never really felt entitled to form a personal belief or deny it; it was more obedience in observation than faith.) These kinds of habits don't die too quickly, in fact they can take centuries - there is the hypothesis that the individualism of capitalism came from the Protestant focus on individual salvation.

My point is that this "goodness" is probably more culturally focused than genetic. While it may be possible that, if people are really careful, they can keep it going within an atheistic culture forever, it can break down pretty easily. Christianity tends to push a certain universalism - without that, if no effort is made to stop it, things probably regress to the tribal level. We cannot really maintain universalism without effort - it can be an atheist effort, but it must be a very conscious one.

As my life experience is the opposite - to me religious faith is really exotic - it seems to me that the difference is that religious folks, and perhaps post-religious atheists for a generation or two, keep moral things real - talk about right and wrong as if it were something as tangible as money. But in my experience it washes out after a few generations, and then only money, power, status stay as real things, because they receive social feedback but right and wrong don't.

To put it differently - religions form communities. Atheists often just hang out. So there is a tendency to form looser, not so tightly knit social interactions. In a tight community, right and wrong get feedback, people judge each other. When interaction becomes looser, it is more like: why give a damn what some random guy thinks about whether what I did was right or wrong? But things like power, status, money still work even in loose interactions, so people become more Machiavellian. At least that is my experience. I am not claiming this universalistic goodness cannot be maintained in atheistic cultures, I am just claiming it requires a special, conscious effort; it does not flow from human nature.

I also think you typically get this "goodness" culture when you participate in a culture that thinks of itself as high-status winners, on an international comparison. Sharing is a way to show this status, this surplus. It requires a certain lack of bitterness. If you feel like your group was maltreated by greater powers, or invaders etc., you will probably stick to the group. Thus sharing still happens in less 'winner' cultures, but more on a personal level: family, friends, not strangers.

This over-optimism about goodness is a typical feature of LW and the Rationality book, so I guess you will feel more at home here than I do. To me it comes across as mistaking the culture of the US for human nature.

I have not formulated this exactly, but I think there is such a thing as a "winner bias". It is very easy for someone from Silicon Valley to think the behaviors there are universal, precisely because being powerful and successful gives one the "privilege" to ignore everything you don't like to see. The most extreme form is a dictator thinking everybody agrees because nobody dares not to, but it also exists in a moderate form: the voices of more successful people and cultures are louder, hence come across as more popular, more universal, unless you know the alternatives first-hand. However, they are pretty surely not universal - if they were, the whole world would be as successful as SV. Well, or at least closer.

For example, a typical "winner bias" may be reading interviews with successful CEOs and thinking this is how all CEOs think. No - the mediocre ones don't get interviewed. So it is more of an availability heuristic. The availability heuristic forms a winner bias, making first-worlders think everybody thinks like first-worlders, because voices that are not propped up by success are not heard across oceans. The other way around is not true, of course.

However, I think this "winner bias" is more than just an availability heuristic. Probably it also has something to do with not having one's ego hurt by other groups having higher status.

I agree that "goodness" is a luxury; only people who do not have serious problems (at least for the given moment) can afford it; or those who have cultivated it in the past and now keep it by the power of habit. On the other hand, I believe that it is universal in the sense that if a culture can afford it, sooner or later some forms of "goodness" will appear in that culture. There will be a lot of inertia, so if a culture gains a lot of resources today, it will not change its behavior immediately. The culture may even have some destructive mechanisms that will cause it to waste the resources before the "goodness" has a chance to develop.

Sorry for not being more specific here, but I have a feeling that we are talking about something that exists only in a few lucky places, but keeps reappearing in different places at different times. It is not universal as in "everyone has it", but as in "everyone has a potential to have it under the right circumstances".

[anonymous]:

Not just surplus; there are empirical records of poor people in rich societies donating more to charity than rich people in rich societies. I think there is also something going on with the whole of society as such, not just people's personal feelings of surplus or not.

[anonymous]:

First, I should say that I didn't mean to assert that some goodness could be found in everyone. I personally guess it is, but that wasn't what this post was about. I just meant that happiness and goodness are the only two things that seem like ultimate motivators for people. Not that everyone has both, just that all actions are motivated by one and/or the other.

Anyway...

My point is that this "goodness" is probably more culturally focused than genetic.

I guess we don't really know :( I like the idea of it being more genetic than cultural, but you could just as well be right. I did the cursory Google search of "is altruism genetic" and found some cool studies, but studies only tell us that genes contribute somewhat, not how much they contribute relative to culture. But culture is human-driven too. Even something like vegetarianism's growing popularity, which is a bit more global and has nothing to do with religion, could show that some people are generally becoming less self-centered? Or what about the decrease in imperialism? The budding effective altruism movement?

Anyway, I get what you're saying. I think I came up with this idea to convince myself that humanity would get along just fine without religion. So I'm biased in favor of the idea that goodness is largely genetic, and still on the upswing, since that's a nice and comforting thought, but I guess that since we don't know the exact ratio of how much genetics contributes relative to culture, we're safer off assuming that it's mostly cultural. If we decide we still like this product of our culture and don't want to lose it, then we should definitely put conscious effort into keeping some idea of "goodness" alive in society.

I just meant that happiness and goodness are the only two things that seem like ultimate motivators for people.

Um... survival? sex? power? curiosity?

You can, of course, make "happiness" a sufficiently large blanket to cover everything, but then you lose any meaning in the term.

[anonymous]:

You can, of course, make "happiness" a sufficiently large blanket to cover everything, but then you lose any meaning in the term.

(shrug) Yeah, I consider it a huge blanket. I didn't really mean to share some grand revelation or anything, just the realization that all our thoughtful decisions (as opposed to those influenced by addiction, inertia, etc) seem to be made either to lead us, as individuals, to our optimal mind-states, and/or to benefit others.

Yeah, I consider it a huge blanket.

If it's so huge, why did you choose to separate out "goodness"? It fits under the blanket quite well -- people who help others get happiness (or get into the desired mind-state) from helping others.

[anonymous]:

Good question!! Introspectively asking myself the same thing is what led to my confusion, which led me to analyze everything and come up with what I wrote about.

So personally, when I donate to effective charities like AMF, I do get some benefits. I like my self-image more, I feel a little bit warm and fuzzy, I feel less guilty about having been born into such a good life. Helping others in this way does improve my mind-state. Yet, if all I wanted to do were increase my own happiness, there would be more efficient ways to go about it. Let's say I donate 15% of my income to AMF. The opportunity cost of that donation could be a month-long vacation to visit my friends in Guatemala, a trip home to see my family in Wisconsin, ski trips, or random acts of kindness like leaving huge restaurant tips. If my only goal is achieving my optimal mind-state, after much introspection, I'm 99% sure I would be better off donating a bit less to charity (but still enough to maintain my self-image) and visiting my family and friends a bit more. So why do I still want to donate the amount I do? This really confused me. Was my donation irrational? You might say it was motivated by guilt, that I would feel guilty for not donating. And I'd say yeah, to some extent, but not quite enough to justify what I'm giving up.

This is my personal example, the one that sparked this post, but it's definitely not the best example. The best example of goodness is sacrificial death. I suppose you could still claim that even someone who knowingly dies to rescue a stranger would have felt so guilty if he hadn't done it that he was acting to stop his mind-state from dipping into the negatives, or something. Or he imagined great honor after his death, and that short-lived happy expectation motivated the action. Honestly, you could be right, and again, my doubt isn't based on anything more than guessing at subconscious motivation, but I'm just guessing that goodness is the motivation here, not happiness. Just like I'm guessing that goodness is what motivates me to donate to effective charities, not deeply subconscious guilt. I don't know the true motivation, but goodness seems like a better guess to me than even huge-blanket-happiness.

Yet, if all I wanted to do were increase my own happiness, there would be more efficient ways to go about it.

That is true for all non-optimal ways of increasing your own happiness.

The best example of goodness is sacrificial death.

So, suicide bombers? X-/

I don't know the true motivation, but goodness seems like a better guess to me than even huge-blanket-happiness.

May I suggest internalized social pressure as a motivation? :-)

[anonymous]:

That is true for all non-optimal ways of increasing your own happiness.

Yes, but practically every other time I recognize myself non-optimally increasing my own happiness (usually due to inertia), I want to fix it and achieve optimal happiness. But not this time.

So, suicide bombers? X-/

I'm guessing here, so correct me if I'm wrong, but I think that they truly believe they're doing God's will. They truly believe God's will is, by definition, good. So maybe they're acting out of their own twisted idea of goodness, or perhaps more likely, they're just acting in a way that they believe will increase their happiness once they receive eternal rewards.

May I suggest internalized social pressure as a motivation? :-)

You certainly may... it's like the tragedy of group selectionism... When we observe species that cannibalize their young, it's a bit harder to imagine an isolated human mother ever sacrificing herself to save her child. But could such an "altruism emotion" gene have evolved? I think the evolution behind it makes sense, and that there are some studies that show this, but I'm far from being an expert on the topic.

I think that "social pressure" motivations are closely related to "guilt" motivations and still fall under the huge-blanket category of happiness. I think they can be a huge factor behind seemingly altruistic decisions, but I don't think they tell the whole story...

I guess we don't really know :( I like the idea of it being more genetic than cultural, but you could just as well be right. I did the cursory google search of "is altruism genetic" and found some cool studies, but studies only tell us that genes contribute somewhat not how much they contribute relative to culture.

How about reading some history, or better yet, things written by cultures other than your own? If you read really old cultures, e.g., Homer, you can get glimpses of the observation that it never seems to have occurred to these cultures that there is anything wrong with killing people who aren't members of one's tribe.

Now look at the way the rioters in Baltimore are behaving right now.

[anonymous]:

Again, the point of this post was not to argue that goodness exists. I understand that people are mostly selfish, and that even the ones who seem altruistic could be mostly motivated by warm fuzzies and avoiding feelings of guilt, or fitting in with their cultures. So I'm not saying we can find goodness in every action, or even most actions... but I am saying we can find it as the ultimate motivator in at least a few actions.

We live in the most peaceful time in history. Is this current peace and decrease in imperialism part of a positive trend, or just a high point on a crazy zigzag line? Have there been other long periods of (relative) peacefulness back in history?

You disliked my comment. Why? Are you saying goodness is not genetic at all? Or that history makes it so obvious that culture is the only significant factor that I should shrug off any studies that show goodness seems partially genetic and not allow them to increase my optimism in any way?

Have there been other long periods of (relative) peacefulness back in history?

Yes.

[anonymous]:

Oh, yeah.... thanks for answering (embarrassed blush for not using Google and not remembering about that even though I'm pretty sure I've heard it before)

Or more recently, the period between the Congress of Vienna and WWI.

Um. You are forgetting the various wars of the Ottoman Empire. And the Russian Empire. And the French revolution with associated aftershocks. And the Germans (e.g. the Austro-Prussian war). And once we get out of Europe, there were wars aplenty in the Western hemisphere, extremely bloody rebellions in China (the Taiping Rebellion) and India (the Indian Rebellion of 1857), etc. etc.

I'm assuming those minor wars don't count here for the same reason els isn't counting things like the Korean and Vietnam wars, the various wars in the Middle East, or the civil wars associated with the War on Drugs.

Edit: Oh yes, also the various de/post-colonial wars, the wars in the Congo, etc.

I think you're confusing "it was a peaceful and VERY successful century for Great Britain" with "it was the time of peace in the world".

It was about as peaceful as the current time.

Are you saying goodness is not genetic at all?

What do you mean by goodness? If by goodness you mean what els (or more generally, your culture) considers "good", then yes, goodness has a large cultural component.

On the other hand, if, as in this thread, you mean a willingness to sacrifice for what one believes to be a good cause, then yes, it probably has a large genetic component. Except "what one believes to be a good cause" has a large cultural component.

For example, as Lumifer mentioned, suicide bombers blow themselves up to spread the true faith. Or the Nazis, who, as the tide of war turned against them, diverted resources from the war effort to making sure future generations of Europeans would have fewer Jews corrupting their culture, even if they're ruled by those ungrateful Allies.

[anonymous]:

In the modern world, goodness is generally understood as wanting others to be happy and not suffer. Sounds like the Golden Rule, as most people want to be happy and not suffer themselves, and goodness is understood as wishing the same for others. To be fair, it does look like a little bit of a narrow view, I remember Roger Scruton remarking that if your philosophy is equally suitable for humans and swine then you may need to rethink something (i.e. happy as a pig in the mud cannot really be the only terminal value, wishing it for everybody cannot be the only terminal goodness), but this is the social consensus today.

[anonymous]:

Ah, then you might like "Град обреченный" (The Doomed City) by A. & B. Strugatsky :)

[anonymous]:

Except, "what one believes to be a good cause" has a large cultural component.

This is true. Sometimes people think they know what's best for society and are wrong.

Anyway, I don't know how much of our culture's seeming to care about others is cultural vs. genetic. I think it's unlikely to be 100% vs. 0%, but I'm not making any further claims than that. If you say that goodness doesn't exist at all, ever, that no one really naturally cares about anyone other than themselves, I'll disagree, but I have no evidence to back this up; as far as I know, both of us would just be guessing at what subconsciously motivates people...

Anyway, I don't know how much of our culture's seeming to care about others is cultural vs. genetic.

Depends on which 'others'.

[anonymous]:

I think that's probably a good point. You would say that genetics has more to do with caring for those close to us, and culture has more to do with caring for strangers we'll never meet, right?

Anyway, I got back from listening to this podcast and would recommend it if you're interested! I liked it and learned some things. Here's the blurb, as you can see it's relevant to this whole discussion:

"Compassion is a universal virtue, but is it innate or taught? Have we lost touch with it? Can we be better at it? In this hour, TED speakers explore compassion: its roots, its meaning and its future."

Good post. I think you are thinking about morality correctly, and I share your feelings about the sentiments behind virtue ethics and consequentialism not being particularly dissimilar or totally incompatible.

Um... so my high school Calculus teacher, who is lots, lots smarter than I am, thinks "emotion" is evidence of intelligent design. I thought this was just a "god of the gaps" thing, but maybe it is really the simplest explanation. I think most people here have ruled out an omnipotent, omni-benevolent God... but maybe life on earth as we know it is really just an alien-child's abandoned science fair project or something.

So: I'm noting your de-conversion story cites emotional reasons and thoughts about morality, rather than epistemic parsimony (the idea that simpler explanations are more likely). Personally, I see this as a "wrong reason" to de-convert.

You can't start out thinking about what is good, and arrive at what is factual. The universe doesn't care about what we think is good. Reality can be messed up and weird, there's no rule against it, so discovering that some of the logical implications of a belief are messed up and weird morally does not mean that belief is false.

And you do allude to this, in this quoted paragraph. You've basically said, "okay, maybe we don't have an omni-benevolent God, maybe reality is messed up and weird, an abandoned science project".

But while a realization that religious explanations are morally and emotionally unsatisfying, followed by a realization that reality is allowed to be morally and emotionally unsatisfying, might lead to superficially rational-ish beliefs by removing religious impediments to clear thinking, this is not the same thing as matter-of-factly concluding "religious explanations are false because they're simply too complicated to be true" without even touching upon emotions and morals.

maybe it is really the simplest explanation

The shortcut to refute that is "but why did the alien god want to do such a science project? That just brings us back to the same question. In positing another conscious being, we haven't added anything to our explanation of the preferences of conscious beings."

Here's the thing: Gods (and conscious beings in general) are very complicated. People seem simpler because we have mental shortcuts to model people; we evolved to deal with people, all our lives people have been doing stuff. But when you step back outside of that, you see that "a conscious being did it" is actually a much more complicated explanation than a five-thousand page essay detailing how a complex phenomenon arose from chemical soup. Conscious beings are really complicated, and you can't invoke them without introducing a whole bunch of complexity concerning how they are structured into your hypothesis. And that's the right reason to deconvert, or to change one's beliefs about reality. The simplicity of a logically consistent explanation should increase your estimate of its likelihood of being correct. The moral niceness should have no bearing.

TL;DR: parsimony + the realization that humans suffer from a sort of "illusion" that consciousness is fundamentally simple = deconversion for epistemically correct reasons. Not ethical or hedonistic considerations.

And I think that coming more fully into that realization might help with the whole "I'm still not very satisfied with the idea of something being an end-in-itself" problem. It shouldn't help, because preferences are preferred regardless of how they are created, whether by god or biochemical soup, but I think it does anyway, because when you intuitively know you've hit the logical bottom as far as justifications go, the "dissatisfied" feeling goes away.

(I'm guessing you know all of this at some level and are mostly kidding with the alien science project hypothesis - perhaps I'm explaining something that doesn't really need to be explained for you at this point. I just thought, given that morality might still be implicitly tangled up in god within your psyche, maybe thinking about parsimony explicitly when thinking about this question will help.)

[anonymous]:

Thanks, and thanks for your thoughtful reply! I had to look up the definition of parsimony, but I think that idea helps a lot.

So: I'm noting your de-conversion story cites emotional reasons and thoughts about morality, rather than epistemic parsimony (the idea that simpler explanations are more likely). Personally, I see this as a "wrong reason" to de-convert.

My story was just a story, really. Not an argument. I probably did de-convert for emotional reasons, but also because I recognized that I only believed what I believed because I was raised believing it. Obviously, there was a chance that I just happened to be born into the one true religion, but I figured if that were the case, I would find my way back there as I examined the evidence. I wanted to start from a clean slate.

The shortcut to refute that is "but why did the alien god want to do such a science project?

Yeah, you're right. Although I didn't even consider "moral niceness" or the lack thereof, since it really wouldn't affect our lives in any way. But okay, I'm already convinced it's not the "simplest" answer... I will edit that part out :)

I'm already convinced it's not the "simplest" answer

I love how people on lesswrong change minds so readily

And I'm still not very satisfied with the idea of something being an end-in-itself:

So, this feeling of dis-satisfaction you are reporting is commonly termed "Existential Angst". "Existentialism" is the idea that morality has no basis in anything deeper than the individual. It's common after deconversions and is related to the whole "God is Dead" Nietzsche thing, and the question of how we can start rebuilding a framework for morality beyond mere hedonism from that point.

The reason I thought explicitly introducing parsimony into your thinking toolkit would help is that maybe once one internalizes that consciousness is complicated and not something which just happens, perhaps the "alien god" will get a little less alien. At some point, I think you'll stop feeling like your preferences and values were arbitrarily chosen by cold random unfeeling processes, and start feeling like the physics driving the "alien god" is really just a natural part of you, and that your values and preferences are a really integral part of you, and you start treating those things with an almost religious reverence. I think once you really understand all that goes into making you conscious and where "good" comes from, the whole thing stops being cold and unfeeling and starts being warm and satisfying.

I was never a Christian or theist in the first place, so I didn't go through precisely the same experience (I was loosely Hindu, and I suspect transitioning from pantheism to reductionism is much easier, especially given the focus on destroying the illusion of a coherent "I" in Vedic religions). But sometime around entering high school, my views on topics such as stem cells and abortion and animal treatment began to shift due to acquiring a reductionist view of consciousness. So I think understanding, at least in principle, how moral stuff and consciousness can be implemented by ordinary non-conscious matter, and getting comfy with the idea that souls are constructed out of solid brain tissue that we can see and touch, helps a lot when one grapples with moral questions and what they are rooted in.

[-][anonymous]9y50

I love how people on LessWrong change minds so readily

Hahahaha I completely interpreted this as sarcasm at first. I'm obviously still getting used to LessWrongers myself :)

So, this feeling of dissatisfaction you are reporting is commonly termed "existential angst". "Existentialism" is the idea that morality has no basis in anything deeper than the individual. It's common after de-conversions and is related to the whole "God is dead" Nietzsche thing, and the question of how we can start rebuilding a framework for morality beyond mere hedonism from that point.

Yeah. Do you know what got me started on this whole idea? I linked to it at the bottom of the article, but I was asking if there was any good reason to pursue ambition over total hedonism, and I now think that the answer is "goodness is an end-in-itself too" and I'm pretty okay with it.

At some point, I think you'll stop feeling like your preferences and values were arbitrarily chosen by cold, random, unfeeling processes, and start feeling like the physics driving the "alien god" is really just a natural part of you, and that your values and preferences are a really integral part of you, and you'll start treating those things with an almost religious reverence. I think once you really understand all that goes into making you conscious and where "good" comes from, the whole thing stops being cold and unfeeling and starts being warm and satisfying.

Wow, I really like how you put that. Other people have tried to share a similar concept with me, but it always seemed cheesy and superficial. It never really started to sink in until now. I think it was the words "natural" and "warm" that did it for me. So thanks!

I linked to it at the bottom of the article, but I was asking if there was any good reason to pursue ambition over total hedonism, and I now think that the answer is "goodness is an end-in-itself too" and I'm pretty okay with it.

The way I look at it is, I'm good because that is what I prefer. There are many possible futures. I prefer some of those futures more than the others. I try my best to choose my favorite future with my actions. "Goodness" is part of what I prefer to happen, which is why I choose it. (And a version of me which didn't prefer goodness wouldn't be me, preferring goodness is a pretty big part of what goes into the definition of "me".)

Wow, I really like how you put that. Other people have tried to share a similar concept with me, but it always seemed cheesy and superficial. It never really started to sink in until now. I think it was the words "natural" and "warm" that did it for me. So thanks!

Very glad I could be helpful! I find Neil deGrasse Tyson / Sagan-esque talk kinda cheesy too. But I remember when I was a kid dabbling in philosophy, thinking hard about free will and monitoring my own thoughts for any trace of randomness, and suddenly it just became really clear that my thoughts and feelings followed predictable processes, and there wasn't any sharp boundary between the laws governing objects and the laws governing minds. It was kind of a magical moment; I felt pretty connected to the universe and all that jazz. It is cheesy, but it's pretty hard to talk about these sorts of spiritual-ish experiences without sounding cheesy.

I don't have anything to add to the discussion, but in the interest of being phatic I just want to say that this is a great introductory post -- welcome to LessWrong!

[-][anonymous]9y00

Thanks!

Phatic is a great term, definitely adopting for use in my own vocabulary!

I had seen a comment on the open thread asking in which part of the body people felt their "sense of self" and whether it changes, and I wanted to contribute "that's so bizarre to me, I've never felt a sense of self anywhere, but I find this discussion interesting" but realized it added nothing to their discussion and stopped myself. I might be more phatic in the future though, now that I have a friendly disclaimer to use. :)

I commented in that thread myself, and what you've said seems a worthy addition even without a disclaimer; it adds at least as much to the discussion as this post, which nobody has downvoted. (Of course, it might seem easy for me to say your comment should be posted if I'm not the one risking the karma punishment for doing so, so note that I'd be willing to copy/paste what you've said and take any punishment/reward for myself if you'd like.)

[-][anonymous]9y00

Ok, if you say so, I'll go chip in my two cents!

[-][anonymous]9y20

We want to do things that make ourselves happy, and we want to do things that make others happy.

One way to test whether we all want to do things that make others happy is to read a book or two. Try "Human Smoke" by Nicholson Baker, for instance. Another test would be to spend part of a day in prison or a mental hospital. But the most direct means I found to disabuse myself of the idea we all want to do things that make others happy is to meet more people. Having met more people, I am now more appreciative of the not-all people who not-all of the time want to be happy and see happiness. And I get made less not-happy because I no longer think everyone is terminally trying to make me happy.

It could not be less wrong that all hearts are as your heart.

Part 2

One ultimate psychological motivation can trump even goodness, and that's the second terminal virtue: personal happiness.

If goodness were a terminal virtue, then how could it ever be trumped by anything? Actually, I think there's an answer to this. To me, being a terminal virtue seems to mean that you value it regardless of whether it leads to anything else. Contrast this with "I value X only to the extent that it leads to Y". But if you have more than one terminal virtue, it seems to follow that you'd have to choose which one you value more, and thus one can trump another. These seem like points worth addressing.

Anyway, so are you saying that the drive for happiness trumps that of goodness? In most people? If so, to be clear, is it your opinion that happiness and goodness really are terminal goals/virtues of people, or are you just saying that "They are terminal virtues, but in cases where you have to choose, I think happiness trumps goodness"?

We usually want what makes us happy. I want what makes me happy. Spending time with family makes me happy. Playing board games makes me happy. Going hiking makes me happy. Winning races makes me happy. Being open-minded makes me happy. Hearing praise makes me happy. Learning new things makes me happy. Thinking strategically makes me happy. Playing touch football with friends makes me happy. Sharing ideas makes me happy. Feeling free makes me happy. Adventure makes me happy. Even divulging personal information makes me happy.

1) You are too cool!

2) From a literary perspective, that's a great job of illustrating by example.

Happiness and goodness might be subconsciously motivating them to choose this instrumental goal. Few people can introspect well enough to determine what's truly motivating them.

Indeed. I think belief in belief would be a great thing to bring up here. Furthermore, I think that explaining it, not just bringing it up, would be a good idea. Ie. a religious person might claim that he wants to become Christ-like even if it meant certain drops in happiness and goodness over the long term. But he may actually act differently, and if he does, then his actual drives oppose what he claims his drives are.

So anytime a Christian does anything but pray for others, do faith-strengthening activities, spread the gospel, or earn money to donate to missionaries, he is anticipating as if God/hell doesn't exist.

Or perhaps willfully disobeying Him? Which actually seems rather likely to me, because most religious people seem not to follow the instructions with 100% comprehensiveness. As someone raised as a Reform Jew, I'm all too aware of this, and always wondered how you could pick and choose which instructions to follow. Perhaps more religious people are different, but my impression was that they follow more like 80-90% of the instructions.

Imagine how many deconversions we would see if it were suddenly sinful to play football, watch TV with your family, or splurge on tasty restaurant meals.

Or maybe we'd just see some interesting new rationalizations! I get your point though.

Basically everyone who says this believes "God wants what's best for us, even when we don't understand it."

I'm not sure what you mean by "best for us" here. Ie. do people believe that God wants happiness for them, goodness for society, or both? (And a new question just came to me - what does God think of animal rights?)

Becoming happy [section]

Your claim in this section seems to be that the terminal virtue of happiness trumps that of goodness (usually?). To really argue this, I think you'd need a lot more evidence.

But given that this is just a section of a larger article, you have limited space. Perhaps a solid intuitive argument could be made in that space, but I didn't find your examples to be intuitively general enough. Ie. if you gave examples that made me think, "Oh yeah, we do things like that in sooooo many different situations", then I would have been more convinced by your claim.

Whatever that stuff is, is what accounts for individual variance in which virtues are pursued.

My strong consequentialist instincts may be giving me a particularly hard time here... but I would specify that you're referring to instrumental virtues. When I think "virtue", I just instinctively think "terminal", and thus I had to reread this a few times before understanding it.

Also worth noting is the individual variance in the extent to which an individual is consciously motivated by happiness vs. goodness. If you look at the preference ratios between the two values, sociopaths are found at one end of the spectrum; extreme altruists, the other. Most of us fall somewhere in the middle.

We talked for a while about preference ratios and altruism ratios, so I know what you mean, but I don't think you explained it thoroughly enough.

Preference ratio := how much I care about me : how much I care about person X

Altruism ratio := "I act altruistically because it will lead to goodness" : "I act altruistically because it will lead to my happiness"

I think that these are two fantastic terms, and that they should be introduced into the "vocabulary of morality".
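
For what it's worth, the first definition is crisp enough to write down as a toy model. Here is a minimal sketch (entirely my own illustration, with invented names, not something defined in the thread), assuming caring combines additively:

```python
# Hypothetical toy model of a "preference ratio": a ratio of a:b means
# I weight my own welfare a times for every b times I weight person X's.

def weighted_care(my_welfare: float, x_welfare: float,
                  preference_ratio: tuple[float, float]) -> float:
    """Combine two welfare levels using a (me, X) preference ratio."""
    me, x = preference_ratio
    return me * my_welfare + x * x_welfare

# A 3:1 preference ratio: my welfare counts three times as much as X's.
print(weighted_care(1.0, 1.0, preference_ratio=(3, 1)))  # 4.0
# A pure sociopath approaches (1, 0); an extreme altruist, (0, 1).
```

On this reading, the sociopath-to-altruist spectrum mentioned above is just the range of possible weight pairs.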

For most people, the only true terminal values are happiness and goodness.

I think what you meant is that for most people, their only terminal values are happiness and goodness. Terminal values belong to a person. Using the word "the" makes it sound like it's some sort of inherent property of the universe (to me at least).

And I'm still not very satisfied with the idea of something being an end-in-itself: [section]

Nicely done!

Why should we be controlled by emotions that originated through random chance?

Wrong question. It's not a matter of whether they should control us. It's a fact that they do.

Exactly! Not many people seem to understand this.

Um... so my high school Calculus teacher, who is lots, lots smarter than I am, thinks "emotion" is evidence of intelligent design. I thought this was just a "god of the gaps" thing, but maybe it is really the simplest explanation. I think most people here have ruled out an omnipotent, omni-benevolent God... but maybe life on earth as we know it is really just an alien-child's abandoned science fair project or something.

The former two lines felt like such a great place to end :(

Why bring up the possibility of intelligent design here? You already mention the alien-god of evolution which implies that there is no intelligent design (I think; I just read the wiki article for the first time quickly). Regardless, the origin of the universe/emotions doesn't seem too relevant and felt like an awkward ending to me.

In short, it was the first time I've had a conversation with a fellow "rationalist" and it was one of the coolest experiences I've ever had in my life.

Likewise!! On both counts.


For the record, I went really hard on you here. I would say "don't take it personally", but I know that you won't ;)

[-][anonymous]9y10

Anyway, so are you saying that the drive for happiness trumps that of goodness? In most people? If so, to be clear, is it your opinion that happiness and goodness really are terminal goals/virtues of people, or are you just saying that "They are terminal virtues, but in cases where you have to choose, I think happiness trumps goodness"?

Nah, either one can trump the other, depending on the situation and the individual.

[flattery]

Thanks :)

Indeed. I think belief in belief would be a great thing to bring up here. Furthermore, I think that explaining it, not just bringing it up, would be a good idea. Ie. a religious person might claim that he wants to become Christ-like even if it meant certain drops in happiness and goodness over the long term

But I bring that up right in the next paragraph! It fits with both, but do you really think it belongs with 'become Christ-like' over 'become obedient to God's will'? Or are you saying that I should mention it twice?

Or perhaps willfully disobeying Him?

Yeah, that too! But that one's so obvious, isn't it? Here, we're talking about people who would actually claim that their terminal goal is to "become obedient," and I don't think you as a Reform Jew would ever have claimed that...

I'm not sure what you mean by "best for us" here. Ie. do people believe that God wants happiness for them, goodness for society, or both? (And a new question just came to me - what does God think of animal rights?)

That's the point, haha, they don't know for sure, because only God knows God's will! As for animal rights, I know only a few Christians who are into it; out of all the many Christians I know, only two are vegetarian... Most believe God gave man dominion over animals, which means we take care of them and eat them. Some will also misinterpret Peter's vision in Acts 10 and cite this as God giving us permission to eat meat, but most will cite Genesis and man's "dominion."

Your claim in this section seems to be that the terminal virtue of happiness trumps that of goodness (usually?). To really argue this, I think you'd need a lot more evidence.

(sigh) If you really think I'm making that argument, or any argument (see my comment to your Part 1), then I really need to practice my writing. :(

When I think "virtue", I just instinctively think "terminal", and thus I had to reread this a few times before understanding it.

(nods) Good, because that was really my goal: to get people to rethink where to draw the boundary.

I think what you meant is that for most people, their only terminal values are happiness and goodness. Terminal values belong to a person. Using the word "the" makes it sound like it's some sort of inherent property of the universe (to me at least).

Oops, let me rephrase that to be more clear. "The only true terminal values are happiness and goodness." Thanks. I do think it's like some sort of inherent property of the universe or something.

The former two lines felt like such a great place to end :(

You're right!!!! That was silly of me. Ending on "emotion" just reminded me of that conversation and I wanted to get some feedback, but I shouldn't have been so lazy and should have asked about it on an open thread or something.

Likewise!! On both counts.

:-)

Oops, let me rephrase that to be more clear. "The only true terminal values are happiness and goodness." Thanks. I do think it's like some sort of inherent property of the universe or something.

To me, saying that it's an inherent property of the universe sounds like "this is the way it is for everyone, and this is the way it always will be". I don't think either of those things are true. You've previously said that you think it's true for the overwhelming majority of people, not everyone. I'm not sure what you think about "this is the way it always will be". A simple argument against that is that you could just rewire someone's brain to produce different drives.

Of course, this is just what I interpret "the only true terminal values are happiness and goodness" and "I do think it's like some sort of inherent property of the universe or something" to imply. I sense that it's a common interpretation, but I'm not sure.

Anyway:

1) I think semantics aside, we agree that a good number of people possess these as their terminal virtues. (I think it's less common than you do, but I do agree that it's true for a good majority of people)

2) Semantics may be annoying, but they're important for communicating, and communicating is important. It's my impression that your writing could be a lot better if the semantics were improved.

[-][anonymous]9y00

To me, saying that it's an inherent property of the universe sounds like "this is the way it is for everyone, and this is the way it always will be". I don't think either of those things are true. You've previously said that you think it's true for the overwhelming majority of people, not everyone. I'm not sure what you think about "this is the way it always will be". A simple argument against that is that you could just rewire someone's brain to produce different drives.

My position has become a bit more extreme, then. I am guessing it's true for everyone, and I do think the universe itself is behind it. I suppose it could change, sure. Whether it's an "inherent property of the universe" might come back to that word "inherent" and whether or not you think "inherent" includes "eternal." I don't think we disagree about anything real here.

1) I think semantics aside, we agree that a good number of people possess these as their terminal virtues. (I think it's less common than you do, but I do agree that it's true for a good majority of people)

Only a majority? So do you think: (1) Some people have no desire for personal happiness, (2) Some people have no desire for goodness, or (3) There is some other psychologically motivated end-in-itself that can't be traced back to one of the two?

Part 1

This is probably the first "philosophical" thought I've had in my life

Haha, good one. Humor is often a good way to open :)

happy

I assume you mean "desirability of mind-state". People associate the word "happy" with a lot of different things, so I think it's worth giving some sort of operational definition (could probably be informal though).

So I suspect a certain commonality among human beings in that we all actually share the same terminal values, or terminal virtues.

I think a quick primer on consequentialism vs. virtue ethics would be appropriate. a) Some people might not know the difference. b) It's a key part of what you're writing about and so a refresher feels like it'd be useful.

You use the phrase "terminal virtues" without first defining it. I don't think it's an "official" term, and I don't think it "has enough behind it" where people could infer what it means.

I think you should more clearly distinguish between what's a question for the social sciences, and what's a question for philosophy.

Social sciences:

1) Do people claim to be consequentialists, or virtue ethicists?

2) Do people act like consequentialists, or virtue ethicists? Ie. what would the decisions they make imply about their beliefs?

3) What are the fundamental things that drive/motivate people? Can it always be traced back to happiness or goodness (as you define them)? Or are there things that drive people independent of happiness and goodness? Example: say that someone claims to value truth. Would they tell the truth if they knew for a fact that it would lead to less happiness and goodness in the long-run?

One of the key points you seem to be making is that as far as 3) goes, for the overwhelming majority of people, their drives/motives can be traced to happiness or goodness. But what does it mean for a drive to be traced to something? Well, my thought is that drives depend on what we truly care about. We may have a drive for X, but if we only care about X to the extent that it leads to Y, then Y is what we truly care about, and I predict that the drive for X will only be as strong as the expectation that X -> Y (although I'm sure the relationship isn't perfectly linear; humans are weird).
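
One hedged way to make that prediction concrete (my own sketch, not the commenter's; the names and the linear form are assumptions, and as noted the real relationship surely isn't linear):

```python
# Toy formalization: drive(X) ~ value(Y) * P(X leads to Y),
# for an X that is cared about only as a means to Y.

def instrumental_drive(value_of_y: float, p_x_leads_to_y: float) -> float:
    """Strength of the drive toward X when X matters only via Y."""
    return value_of_y * p_x_leads_to_y

# Valuing Y highly but doubting X helps yields only a weak drive for X.
print(instrumental_drive(value_of_y=10.0, p_x_leads_to_y=0.1))  # 1.0
```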

However, this is a question for the social sciences. The way to figure it out would be to study it scientifically, ie. by observing how people act and feel in different situations.

Philosophy:

1) Does anything have "intrinsic value"?

2) What does having "intrinsic value" even mean exactly? How would the world look if things had intrinsic value? How would it look if things didn't have intrinsic value?

3) What about morality? What does it mean for something to be moral/good? How do these rules get determined?

My stance is that a) the words I mention above are hard to use because they don't have precise and commonly accepted definitions, and b) terminal goals are completely arbitrary. Ie. you can't say that killing people is a bad terminal goal. You can only say "killing people is bad" if... you want to promote a sane and happy world. (Instrumental) rationality is about being good at achieving our ends. But it doesn't help us pick our ends.

I don't want to believe this, though. I've been conditioned to feel like ends are good/bad, despite my understanding. And I've been conditioned to seek purpose, ie. to find and seek "good" ends. Because of the way I've been conditioned, I don't like believing that goals are completely arbitrary, but unfortunately it's the view that makes the most sense to me, by very large margins.

Often, but not always, these two desires go hand-in-hand.

I don't think it's completely clear what this means. I think you mean "doing good tends to also make us happy". You do end up saying this, but I think you say it two sentences too late. Ie. I'd say "doing good tends to also make us happy" before using the hand-in-hand phrase, and before talking about the "components" of happiness (I'd use the word determinants, which is a bit of a nitpick).

psychological motivators

I have a feeling that this isn't the right term. Regardless, I'd explain what you mean by it.

handing out money through personal-happiness-optimizing random acts of kindness

Aka warm fuzzies.

As rational human beings, we occasionally will consciously choose to inefficiently optimize our personal happiness for the sake of others.

Very important point: If you're claiming that doing so is rational, then one of two things must be the case:

1) You alter your claim to say that it's rational... presuming a terminal value of goodness.

2) You argue that a terminal value of goodness is rational.

As I read, I couldn't help but think that virtue ethics and consequentialism are not really so different at heart.

Another very important point: distinguish theory from practice.

As I understand it:

  • In theory, they're complete opposites. A virtue ethicist would say, "X is just inherently virtuous. It doesn't matter what the consequences are." A consequentialist would say that it does depend on the consequences. Someone might say, "But consequentialists have to choose terminal values, don't they?" My response: "Yes, but they admit that this is an arbitrary decision. They don't claim that these terminal values are virtuous (as I understand it)."
  • In practice, virtue ethicists often pursue things to achieve the end of being virtuous, and their virtues are often very, very similar to the terminal values of consequentialists. At the end of the day, their virtues are pretty much just happiness and goodness. And at the end of the day, these are often the terminal values that consequentialists choose. I think that this is the point that you were making. And I thank you for making it, because I didn't really pay much attention to that fact. My overly literal and reductionist approach failed to lead me to notice how important the practical outcome is. Furthermore, I'm not sure how true this is, but it seems that in practice, a lot of consequentialists believe that their terminal goals do possess inherent virtue, in which case the lines do get really fuzzy between consequentialism and virtue ethics.
[-][anonymous]9y10

Thanks for the tips! Adding a brief primer on virtue ethics and consequentialism is a good idea, and I think you're right that this whole idea is more relevant to the social sciences than to philosophy. Did you actually want answers to those questions, or were they just to help show me the kind of questions that belong in each category? Great distinction at any rate; I'll go change that word "philosophical" to "intellectual" now.

I think you noticed, or at least, you've now led me to notice, that I'm not really interested in the "in theory" at all, or in struggling over definitions. I'm just trying to show what is actually happening "in practice" and suggest that whether someone calls himself a virtue ethicist or a consequentialist doesn't change the fact that he is psychologically motivated (for lack of a better term) to pursue happiness and goodness. I think what I'm trying to do with this article is help figure out where we should draw a boundary.

b) terminal goals are completely arbitrary. Ie. you can't say that killing people is a bad terminal goal. You can only say "killing people is bad" if... you want to promote a sane and happy world. (Instrumental) rationality is about being good at achieving our ends. But it doesn't help us pick our ends.

I think this might have been my whole point, that our real ends aren't as arbitrary as we think. It seems to me that in practice there are really just two ends that humans pursue. Nothing else seems like an end-in-itself. Killing people can be an instrumental goal that someone consciously or subconsciously thinks will make him happy, that will lead him to his optimal mind-state. He might be wrong about this; it might not actually lead him to his optimal mind-state. Or maybe it does. Either way, it doesn't matter in the context of this discussion whether we classify killing as "wrong" or not; what matters is what we do about it. In the real world, we're motivated, by our own desires for personal happiness and goodness, to lock up killers.

Very important point: If you're claiming that doing so is rational, then one of two things must be the case:

But I'm not claiming it's rational... I'm not claiming anything, and I'm not arguing anything or proving any point. I'm just describing how I observe that people who seem very rational can still maximize their personal happiness inefficiently. The resulting idea is that goodness seems like an end-in-itself, and a relatively universal one, so we should recognize it as such.

The main takeaway I'm getting from your advice is that I should try to make it clear in this article that I'm not attempting to prove a point, but rather just to "carve along the joints" and offer a clearer way of looking at things by lumping happiness and goodness into the same category.

Perhaps one other way we could describe what is actually happening in practice would be to say that virtue ethicists pursue their terminal values more subconsciously while consequentialists pursue the same terminal values more consciously.

Did you actually want answers to those questions, or were they just to help show me the kind of questions that belong in each category?

The latter.

I think you noticed, or at least, you've now led me to notice, that I'm not really interested in the "in theory" at all, or in struggling over definitions.

I didn't know you weren't interested in it at all, but I knew you were more interested in the practice part. Come to think of it, I suspect that you're exaggerating in saying that you don't really care about it at all.

I'm just trying to show what is actually happening "in practice" and suggest that whether someone calls himself a virtue ethicist or a consequentialist doesn't change the fact that he is psychologically motivated (for lack of a better term) to pursue happiness and goodness.

Well said. In your article, I think that some of the language implies otherwise, but I don't like talking about semantics either and I think the important point is that this is clear now.

The other important point is that I've screwed up and need to be better. I have an instinct to interpret things literally. I also try hard to look for what people probably meant given more contextual-type clues, but I've partially failed in this instance, and I think that all the information was there for me to succeed.

I think this might have been my whole point, that our real ends aren't as arbitrary as we think. It seems to me that in practice there are really just two ends that humans pursue.

I think that we agree, but let me just make sure: ends are arbitrary in the sense that you could pick whatever ends you want, and you can't say that they're inherently good/bad. But they aren't arbitrary in the sense that what actually drives us isn't arbitrary at all. Agree?

But I'm not claiming it's rational... I'm not claiming anything, and I'm not arguing anything or proving any point. I'm just describing how I observe that people who seem very rational can still maximize their personal happiness inefficiently. The resulting idea is that goodness seems like an end-in-itself, and a relatively universal one, so we should recognize it as such.

Let me try to rephrase this to see if I understood and agree: "People who seem very rational seem to act in ways that don't maximize their personal happiness. One possibility is that they are trying to optimize for personal happiness but failing. I think it's more likely that they are optimizing for goodness in addition to happiness. Furthermore, this seems to be true for a lot of people."

[-][anonymous]9y00

I didn't know you weren't interested in it at all, but I knew you were more interested in the practice part. Come to think of it, I suspect that you're exaggerating in saying that you don't really care about it at all.

Hah, and I thought I was literal. I guess I'm interested in knowing the "in theory" just so I can make connections (like adherents to different moral systems having different tendencies in terms of making decisions consciously vs. subconsciously) to the "in practice".

The other important point is that I've screwed up and need to be better. I have an instinct to interpret things literally. I also try hard to look for what people probably meant given more contextual-type clues, but I've partially failed in this instance, and I think that all the information was there for me to succeed.

But at the same time, you've really helped me figure out my point, which wouldn't have happened if you said "nice article, I get what you're saying here." In regular life conversations, it's better to just think about what someone meant and reply to that, but for an article like this, it was totally worthwhile for you to reply to what I actually said and share what you thought it implied.

I think that we agree, but let me just make sure: ends are arbitrary in the sense that you could pick whatever ends you want, [and you can't say that they're inherently good/bad.] But they aren't arbitrary in the sense that what actually drives us isn't arbitrary at all. Agree?

The bracketed part I don't care about. Discussing "inherently good/bad" seems like a philosophical debate that hinges on our ideas of "inherent." The rest, I agree :) We seem to choose which actions to take arbitrarily, and through those actions we seemingly arbitrarily position ourselves somewhere on the happiness-goodness continuum.

Let me try to rephrase this to see if I understood and agree: "People who seem very rational seem to act in ways that don't maximize their personal happiness. One possibility is that they are trying to optimize for personal happiness but failing. I think it's more likely that they are optimizing for goodness in addition to happiness. Furthermore, this seems to be true for a lot of people."

Great wording! May I plagiarize?

Why did the alien-god give us emotions?

The alien-god does not act rationally.... The origin of emotion ultimately seems like the result of random chance.

Emotions are likely as useless as other things the alien-god gave us, things like eyes and livers and kidneys and sex-drives and fight-or-flight responses.

Emotions appear to drive social cooperation, at least between mammals. Human partnership with dogs is mediated and cemented by emotions; I think only someone who has spent no time with dogs could disagree with this observation. Emotions and their expressions are common enough between humans and dogs that they probably exist across, at least, a broad swath of mammals.

Just two examples of what emotions get us: 1) pair-bonding leading to effective partnership at raising our extremely needy young, and 2) a nearly irresistibly powerful impetus to get the hell away from scary animals, especially if they surprise us at night. Pretty clearly, both of these are quite useful to our survival, and so these emotions would have been developed by natural selection for fitness, just as the kidney's ability to clean blood and the eye's ability to focus would have been.

[-]oge9y00

Hey els, thanks for posting your thoughts. It'd be nice if you put a summary in the first paragraph seeing as the article is so long.

Welcome to LessWrong! I wouldn't comment if I didn't like your post and think it was worth responding to, so please don't interpret my disagreement as negative feedback. I appreciate your post, and it got me thinking. That said, I disagree with you.

The real point I'm making here is that however we categorize personal happiness, goodness belongs in the same category, because in practice, all other goals seem to stem from one or both of these concepts.

Your claims are probably much closer to true for some people than they are for me, but they are far from accurate for characterizing me or the people who come most readily to mind for me.

Depending on what you mean by goals, either happiness doesn't really affect my goals, or the force of habit is one of the primary drivers of my goals. Happiness is a major influence on my ordinary behavior, but is seldom something that I think about very much when making long term plans. (I have thought about thinking about happiness in my long term plans, and decided against doing so because striving after personal happiness in my long term plans does not fit with my personal sense of identity even though it is reasonably consistent with my personal sense of ethics.) Like happiness/enjoyment, routine is a major driver of my everyday behavior, and while it is somewhat motivated by happiness, it comes more from conditioning, much of which was done to me by other people, and much of which I chose for myself. Most of the things I do are simply the things I do out of habit.

When I choose things for myself and make long term plans, virtue/goodness is something that I consider, but I also consider things that are far from being virtue/goodness as you used the term and as most other people use the term. The two things that immediately spring to mind as part of my considerations are my sense of identity/self-image and my desire to be significant.

I was an Anglophile in my teenage years, and one of the lasting consequences of that phase of my life is that I Do Not Drink Coffee. This isn't because I don't think I should drink coffee. This isn't because I think drinking coffee would make me less happy. It is simply because drinking coffee is one of the things that I do not do. I drink tea. I would be less myself, from my own perspective, if I started drinking coffee than I am by continuing to not drink coffee and by sometimes drinking tea. Not drinking coffee is part of what it means to me to be me.

My dad is a lifelong Cubs fan. I have sometimes joked to him that one of the things he could do to immediately make his life happier is to quit being a Cubs fan and become a Yankees fan. My dad cares about sports. He would be happier if he were a Yankees fan, but he is not a Yankees fan. (You could argue that this is loyalty, but I would disagree... My dad's from the Midwest, but he lives on the East Coast now. When other people move from one part of the country to another and their sports allegiances change, he doesn't find that surprising, upsetting, or in any way reprehensible. There are other aspects of life where he does believe people are morally obligated to be loyal, and he finds it reprehensible when other people violate family loyalty and other forms of loyalty that he believes are morally obligatory.)

In terms of strength of terminal values, a sense of personal identity is, in most of the cases that I can think of, stronger than a desire for happiness and weaker than a desire to be good. Sort of. Not really. That's just what I want to say and believe about myself but it's not true. It's easier for me to give an example having to do with sports than one having to do with tea. (Sorry, I grew up with them... and they spring to mind as vivid examples much more so than other subjects, at least for me.)

I'm a very fickle sports fan by most standards. I don't really have a sport I enjoy watching in particular, and I don't really have a team that I cheer for, but every once in a while, I will decide to watch sports... usually a tournament. And then I'll look at a bunch of stats and read a bunch of commentary, and pick what team I think deserves to win, and cheer for that team, for that tournament. Once I pick a team, I can't change my mind until the tournament is over... It's not that I don't want to or think I shouldn't. It's that even when I think that I ought to change my mind, I still keep cheering for the same team as I did before...

Sometimes, I don't realize that I'll be invited over to someone else's house for one of the games. Sometimes, when this happens, I'm cheering for a different team than everyone else, and I feel extremely silly for doing this and a little embarrassed about it because I'm not really a fan of that team. They're just the team I picked for this tournament. So I'll go over to someone's house, and I'll try to root for the same team as everyone else, and it just won't work. The home team is ahead, and I'll smile along with everyone else. I won't get upset that my team is losing. People won't realize that my team is losing; they'll just think I don't care that much about the game... but then, if my team starts to make a comeback, I suddenly get way more interested in the game. I'll start to reflexively move in certain ways. I'll pump my fist a little when they score. I'll try to keep my gestures and vocalization subtle and under control; I'm still embarrassed about rooting for that team... But I'm doing it anyways, because that's my team, at least for today. Then when the home team comes back again and wins it, I'm disappointed, and I'm even a little more embarrassed about rooting against them than I would have been if they'd lost.

This wouldn't change even if I had some ethical reason for wanting the other team to win. If (after the tournament had begun and I'd picked what team I was cheering for) some wealthy donor announced that he was going to make some big gift to a charity that I believe in if and only if his team won, and his team didn't happen to be my team... I would start to feel like I should want his team to win. I know who I cheer for doesn't affect the outcome of the game, but I still feel like it would be more ethical to cheer for the team that would help this philanthropic cause if it won. I'd try to root for them just like I'd try to cheer for the home team if I got invited over to a friend's house to watch the game. But I wouldn't actually want that team to win. When the game started and the teams started pulling ahead of and falling behind each other as so often happens in games, my enthusiasm for the game would keep increasing as my team was pulling ahead and keep falling off again when they started losing ground. It's just what happens when I watch sports.

My sense of identity also affects my life choices and long term plans. For example, many of my career choices have had as much to do with what roles I can see myself in as they have had to do with what I think would make me happy, what I do well, and what impact I want to have on the world. I think most people can identify with this particular feeling, and this comment is long enough already, so I won't expand on it for now...

By far, the biggest motivator of my personal goals, however, is significance. I want to matter. I don't want to be evil, but mattering is more important to me than being good... The easiest way for me to explain my moral feelings about significance is to say that, in practice, I am far more of a deontologist than I am in theory. Karl Marx is an example of someone who matters, but was not what I would call good. He dramatically impacted the world and his net impact has probably been negative, but he didn't behave in any way that would lead me to consider him evil, so he's not evil. I would rather become someone who matters and whom I would consider good. Norman Borlaug is a significant person whose contributions to the world are pretty much unambiguously good. (Though organic food movement people and other Luddites would erroneously disagree.) Bach, Picasso, and Nabokov are all examples of other people who are extremely significant without necessarily having done anything I would call good. They've had a lasting impact on this world.

I want to do that... I don't want to be the sort of person who would do that. I don't want to have the traits that allowed Bach to write the sort of music in his time that would be remembered in our time. I want to carve the words "Austin was here" so deep into the world that they can never be erased. (Metaphorically, of course.) I want to matter.

...and not just in that "everybody is important, everybody matters" sort of way...

I would much rather be happy, good, and significant than any two of the three. If I can only be two, I would want to be good and significant. And if I can only be one, I would want to be significant. I don't want to be evil... there are some things I wouldn't do even if doing them guaranteed that I would become significant. A few lines I would not cross: I wouldn't rape or torture anyone. I wouldn't murder someone I deemed to be innocent. But if the devil and souls were real, I might sell my soul.

Interestingly, the lines I wouldn't cross for the greater good are different from the lines I wouldn't cross to obtain significance. I would kill somebody I deemed to be innocent to save the lives of a hundred innocent people... but not to save just two or three innocent people. On the other hand, if the devil and souls were real and he came to me with the offer, I wouldn't sell my soul to save the lives of a hundred or even a thousand people I deemed to be innocent though I would seriously consider selling my soul to obtain significance. Whatever my values are, they are not well-ordered. (Which is not quite the same as saying they are illogical, though many would interpret it that way.)

[-][anonymous]9y00

Hi, thanks for your reply! I'm not yet sure that we actually disagree. What do you think of when you hear the word happiness? If you're thinking of happiness simply as "pleasure," then I would agree that pleasure and goodness are not the only psychological motivators. I used happiness to describe someone's preferred mind-state, the mind-state in which someone would feel the most content. So it's different for everyone. Some people are happy just to follow their impulses and live in the moment, but other personality types are happier when they have a strong sense of identity, which seems to be what you're describing.

You also say you want to matter. I think the belief that we will be remembered after our deaths is one that would lead to happiness, too, so we want to act in such a way that encourages this belief in ourselves.

I identify with a lot of what you're saying. I'm less identity-driven than most people, but there are still certain things about myself (being frugal, for example) that, even if I knew changing them would bring me pleasure, I wouldn't want to change, simply because I consider them part of my identity. Although it doesn't make complete sense to me, I think that this small sense of identity contributes to my happy mind-state.

So I'm guessing that your idea of happiness was just a bit more narrow than mine was? But we probably still agree?

Do people actually believe that no one in England drinks coffee at all?

Well, they have something they claim is coffee here...

I haven't read much about ethics on LessWrong or in general, but I recall reading, in passing, things to the effect of, "Consequentialism (more specifically, utilitarianism) and deontology are not necessarily incompatible." I always assumed that they were reconciled by taking utility maximization as the ideal case and deontology as the tractable approximation. "There may be situations in which killing others results in greater utility than not killing others, and those situations may even be more common than I think, but given imperfect information, limited time and cognitive resources, and the fact that not-killing usually results in better outcomes than killing, Thou Shalt Not Kill."

If the idea is that deontology is good for instilling actual moral behavior in the real world because it's more practically applicable and more psychologically appealing, then I think that virtue ethics is an even better fit, because it's even more psychologically appealing, and therefore even more likely to be exercised in practice. Compare, "Not killing maximizes utility on average," "Killing is wrong," and "It is virtuous to be a person who does not kill." I'm reminded of Roles Are Martial Arts for Agency.
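
To make the "ideal case vs. tractable approximation" framing concrete, here is a minimal sketch (entirely my own construction; the actions, numbers, and function names are invented): the ideal agent needs a full expected-utility model, while the rule-following agent just filters actions.

```python
# Hedged sketch of "deontology as a tractable approximation":
# the ideal case scores every action; the approximation applies a rule.

from typing import Callable, Iterable

def ideal_choice(actions: Iterable[str],
                 expected_utility: Callable[[str], float]) -> str:
    """Ideal case: pick the action with the highest expected utility."""
    return max(actions, key=expected_utility)

def rule_bound_choice(actions: Iterable[str], forbidden: set[str]) -> str:
    """Approximation: rule out forbidden actions; no outcome model needed."""
    return next(a for a in actions if a not in forbidden)

actions = ["kill", "negotiate"]
# "Thou Shalt Not Kill" as a cheap filter, no utility estimates required:
print(rule_bound_choice(actions, forbidden={"kill"}))  # negotiate
```

The same move seems to work for the virtue-ethics framing: swap the forbidden-set for an identity check like "is this something a person who does not kill would do?"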

And I'm still not very satisfied with the idea of something being an end-in-itself.

Hmm. I used to get depressed about natural explanations, but that was because I felt like something changed about the world when I learned something about it; really, it was just my mind that changed. It's not that my emotions are the alien god's fitness-maximizing adaptations in the sense of being less than what they were before. They're just the same thing. I think that you feel like the equation goes: Emotions - 'meaning' = Alien-God's-Fitness-Maximizing-Adaptations. I feel like it goes: Emotions = Alien-God's-Fitness-Maximizing-Adaptations. I still hope and dream and love and all of those nice, warm fuzzy things; I just know what they are now and what caused them, as opposed to them being mysterious.

Like, what do you think is missing in emotions and the like that you thought was there before? How do you expect this world to differ from the world that you thought you lived in? From the things that you're talking about, I'm assuming that you're referring to differences besides there being no afterlife. You don't have to entertain me if you don't want to.

[-][anonymous]9y00

I've come to realize that the reason I wrote this post was not to discuss the ethical systems at all. I'm not trying to discuss a general guideline for morality. I'm merely analyzing what leads humans in practice to make decisions. I think that happiness and goodness belong in the same category, and that this has to do with where to draw the boundary. I think the difference in practice between how a professing virtue ethicist acts and how a professing consequentialist acts is that the virtue ethicist tends to make more decisions on a subconscious level, while the consequentialist tends to make more decisions on a conscious level. Comparatively speaking.

Edit: I've tweaked the article a bit to better reflect this idea!

I still hope and dream and love and all of those nice, warm fuzzy things; I just know what they are now and what caused them, as opposed to them being mysterious.

Yeah, same, I think :) Any discontent I feel about that is on an understanding level, not an emotional level. Or rather, I didn't fully understand the cause, but thanks to your explanation, now I understand a lot better. I think I'm going to change that heading.

I used to feel like this world was different than the world that I thought I lived in because of that whole "thief" thing I talked about near the beginning of the article. Coming to the conclusion that I did about goodness being a universal terminal value has helped me sort things out in my mind and acknowledge that I can follow my "conscience" without considering myself "irrational" for sometimes doing stuff, like effective altruism, that inefficiently optimizes my personal happiness.

Um... so my high school Calculus teacher, who is lots, lots smarter than I am, thinks "emotion" is evidence of intelligent design.

I have gone in a similar direction at times, finding my own consciousness to be evidence of a general place for consciousness in the universe. That is, if the machine that is me is conscious of itself in this universe, then EITHER there is a magic sky-father who pastes consciousness onto some kinds of machines but not others, at his whim, OR there is a physics-of-consciousness that we observe produces consciousness in us, but which would then lead us to expect that there is consciousness in things other than us. So while your calculus teacher goes to theism, apparently, upon reflecting on his own emotions (and presumably his consciousness of them), I tended to go to pan-theism, not in the sense that there is something that is omniscient or omnibenevolent, but that there is some principle of consciousness in the physical universe we live in, of which our own particular kind of consciousness is just one instance.

I would not think as ill of your calculus teacher about this as many other a-theists might. Consciousness is real, and not something we have any physics for yet.

[-][anonymous]9y20

I definitely don't think ill of him either. It's really just something I don't understand.

but which would then lead us to expect that there is consciousness in things other than us.

Interesting... I might give this idea a bit of thought in the future, thank you :)

I think that would be panpsychism, not pantheism.