
adamzerner comments on On not getting a job as an option - Less Wrong Discussion

36 Post author: diegocaleiro 11 March 2014 02:44AM


Comment author: adamzerner 16 April 2015 01:22:27AM *  0 points [-]

For reference:

But I don't want to admit my preference ratios are that far out of whack; I don't want to be that selfish.

Why?

I think that exploring and answering that question will be helpful.

Try thinking about it in two ways:

1) A rational analysis of what you genuinely think makes sense. Note that rational does not mean completely logical.

2) An emotional analysis of what you feel, why you feel it, and, in the event that your feelings aren't accurate, how you can nudge them to be more accurate.

You:

I want to answer this question with "because emotion!" Is this allowed?

Also:

Analyze emotion?? Can you do that?! As an ISTP, just identifying emotion is difficult enough.

Absolutely! That's how I'd start off. But the question I was getting at is "why does your brain produce those emotions". What is the evolutionary psychology behind it? What events in your life have conditioned you to produce this emotion?

By default, I think it's natural to give a lot of weight to your emotions and be driven by them. But once you really understand where they come from, I think it's easier to give them a more appropriate weight, and consequently, to better achieve your goals. (1,2,3)

And you could manipulate your emotions too. Examples: You'll be less motivated to go to the gym if you lie down on the couch. You'll be more motivated to go to the gym if you tell your friends that you plan on going to the gym every day for a month.

So I think that based solely on my intuition here, I might disagree with you and find personal happiness and altruism to be two separate terminal goals, often harmonious but sometimes conflicting.

So you don't think terminal goals are arbitrary? Or are you just proclaiming what yours are?

Edit:

But honestly I rarely think about it, and I'm 99.99% sure the overall impact on my happiness is much smaller than if I were to use the money to fly to Guatemala and take a few weeks' vacation to visit old friends. Yet, even as I acknowledge this, I still want to donate. I don't know why.

Are you sure that this has nothing to do with maximizing happiness? Perhaps the reason why you still want to donate is to preserve an image you have of yourself, which presumably is ultimately about maximizing your happiness.

(Below is a thought that ended up being a dead end. I was going to delete it, but then I figured you might still be interested in reading it.)

Also, an interesting thought occurred to me related to wanting vs. liking. Take a person who starts off with only the terminal goal of maximizing his happiness. Imagine that the person then develops an addiction, say to smoking. And imagine that the person doesn't actually like smoking, but still wants to smoke. Ie. smoking does not maximize his happiness, but he still wants to do it. Should he then decide that smoking is a terminal goal of his?

I'm not trying to say that smoking is a bad terminal goal, because I think terminal goals are arbitrary. What I am trying to say is that... he seems to be actually trying to maximize his happiness, but just failing at it.

DEAD END. That's not true. Maybe he is actually trying to maximize his happiness, maybe he isn't. You can't say whether he is or he isn't. If he is, then it leads you to say "Well if your terminal goal is ultimately to maximize your happiness... then you should try to maximize your happiness (if you want to achieve your terminal goals)." But if he isn't (just) trying to maximize happiness, he could add in whatever other terminal goals he wants. Deep down I still notice a bit of confusion regarding my conclusion that goals are arbitrary, and so I find myself trying to argue against it. But every time I do I end up reaching a dead end :/

Anyway, I went back through the book and found the title of the post. It's Terminal Values and Instrumental Values. You can jump to "Consider the philosopher."

Thank you! That does seem to be a/the key point in his article. Although "I value the choice" seems like a weird argument to me. I never thought of it as a potential counter argument. From what I can gather from Eliezer's cryptic rebuttal, I agree with him.

I still don't understand what Eliezer would say to someone that said, "Preferences are selfish and Goals are arbitrary".


1- Which isn't to imply that I'm good at this. Just that I sense that it's true and I've had isolated instances of success with it.

2 - And again, this isn't to imply that you shouldn't give emotions any weight and be a robot. I used to be uncomfortable with just an "intuitive sense" and not really understanding the reasoning behind it. Reading How We Decide changed that for me. 1) It really hit me that there is "reasoning" behind the intuitions and emotions you feel. Ie. your brain does some unconscious processing. 2) It hit me that I need to treat these feelings as Bayesian evidence and consider how likely it is that I have that intuition when the intuition is wrong vs. how likely it is that I have the intuition when the intuition is right.

3 - This all feels very "trying-to-be-wise-sounding", which I hate. But I don't know how else to say it.
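The idea in footnote 2, treating an intuition as Bayesian evidence, can be sketched numerically. The probabilities below are made-up numbers for illustration, not anything from the comment:

```python
def posterior(prior, p_intuition_if_right, p_intuition_if_wrong):
    """P(I'm right | I have this intuition), via Bayes' rule."""
    numerator = p_intuition_if_right * prior
    return numerator / (numerator + p_intuition_if_wrong * (1 - prior))

# Start at 50-50; suppose this kind of gut feeling shows up 80% of
# the time when I'm right but 30% of the time when I'm wrong.
print(round(posterior(0.5, 0.8, 0.3), 3))  # 0.727
```

The intuition shifts the odds, but how much depends entirely on how much more often it fires when you're right than when you're wrong.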

Comment author: [deleted] 17 April 2015 06:10:29AM *  0 points [-]

Oops, just when I thought I had the terminology down. :( Yeah, I still think terminal values are arbitrary, in the sense that we choose what we want to live for.

So you think our preference is, by default, the happiness mind-state, and our terminal values may or may not be the most efficient personal happiness-increasers. Don't you wonder why a rational human being would choose terminal goals that aren't? But we sometimes do. Remember your honesty in saying:

Regarding my happiness, I think I may be lying to myself though. I think I rationalize that the same logic applies, that if I achieve some huge ambition there'd be a proportional increase in happiness. Because my brain likes to think achieving ambition -> goodness and I care about how much goodness gets achieved. But if I'm to be honest, that probably isn't true.

I have an idea. So based on biology and evolution, it seems like a fair assumption that humans naturally put ourselves first, all the time. But is it at all possible for humans to have evolved some small, pure, genuine concern for others (call it altruism/morality/love) that coexists with our innate selfishness? Like one human was born with an "altruism mutation" and other humans realized he was nice to have around, so he survived, and the gene is still working its way through society, shifting our preference ratios? It's a pleasant thought, anyway.

But honestly, I literally didn't even know what evolution was until several weeks ago, so I don't really belong bringing up any science at all yet; let me switch back to personal experience and thought experiments.

For example, let's say my preferences are 98% affected by selfishness and maybe 2% by altruism, since I'm very stingy with my time but less so with my money. (Someone who would die for someone else would have different numbers.) Anyway, on the surface I might look more altruistic because there is a LOT of overlap between decisions that are good for others and decisions that make me feel good. Or, you could see the giant overlap and assume I'm 100% selfish. When I donate to effective charities, I do receive benefits like liking myself a bit more, real or perceived respect from the world, a small burst of fuzzy feelings, and a decrease in the (admittedly small) amount of personal guilt I feel about the world's unfairness. But if I had to put a monetary value on the happiness return from a $1000 donation, it would be less than $1000. When I use a preference ratio and prefer other people's happiness, their happiness does make me happy, but there isn't a direct correlation between how happy it makes me and the extent to which I prefer it. So maybe preference ratios can be based mostly on happiness, but are sometimes tainted with a hint of genuine altruism?

Also, what about diminishing marginal returns with donating? Will someone even feel a noticeable increase in good feelings/happiness/satisfaction giving 18% rather than 17%? Or could someone who earns 100k purchase equal happiness with just 17k and be free to spend the extra 1k on extra happiness in the form of ski trips or berries or something (unless he was the type to never eat in restaurants)? Edit: never mind this paragraph, even if it's realistic, it's just scope insensitivity, right?

But similarly, let's say someone gives 12% of her income. Her personal happiness would probably be higher giving 10% to AMF and distributing 2% in person via random acts of kindness than it would giving all 12% to AMF. Maybe you're thinking that this difference would affect her mind-state, that she wouldn't be able to think of herself as such a rational person if she did that. But who really values their self-image of being a rational opportunity-cost analyzer that highly? I sure don't (well, 99.99% sure anyway).

Sooo could real altruism exist in some people and affect their preference ratios just like personal happiness does, but to a much smaller extent? Look at (1) your quote about your ambition (2) my desire to donate despite my firm belief that the happiness opportunity cost outweighs the happiness benefits (3) people who are willing to die for others and terminate their own happiness (4) people who choose to donate via effective altruism rather than random acts of kindness

Anyway, if there was an altruism mutation somewhere along the way, and altruism could shape our preferences like happiness, it would be a bit easier to understand the seeming discrepancy between preferences and terminal goals, between likes and wants. Here I will throw out a fancy new rationalist term I learned, and you can tell me if I misunderstand it or am wrong to think it might apply here... occam's razor?

Anyway, in case this idea is all silly and confused, and altruism is a socially conditioned emotion, I'll attempt to find its origin. Not from giving to church (it was only fair that the pastors/teachers/missionaries get their salaries and the members help pay for building costs, electricity, etc). I guess there was the whole "we love because He first loved us" idea, which I knew well and regurgitated often, but don't think I ever truly internalized. I consciously knew I'd still care about others just as much without my faith. Growing up, I knew no one who donated to secular charity, or at least no one who talked about it. The only thing I knew that came close to resembling large-scale altruism was when people chose to be pastors and teachers instead of pursuing high-income careers, but if they did it simply to "follow God's will" I'm not sure it still counts as genuinely caring about others more than yourself. On a small-scale, my mom was really altruistic, like willing to give us her entire portion of an especially tasty food, offer us her jacket when she was cold too, etc... and I know she wasn't calculating cost-benefit ratios, haha. So I guess she could have instilled it in me? Or maybe I read some novels with altruistic values? Idk, any other ideas?

I still don't understand what Eliezer would say to someone that said, "Preferences are selfish and Goals are arbitrary".

I'm no Eliezer, but here's what I would say: Preferences are mostly selfish but can be affected by altruism, and goals are somehow based on these preferences. Whether or not you call them arbitrary probably depends on how you feel about free will. We make decisions. Do our internal mental states drive these decisions? Put in the same position 100 times, with the same internal mental state, would someone make the same decision every time, or would it be 50-50? We don't know, but either way, we still feel like we make decisions (well, except when it comes to belief, in my experience anyway) so it doesn't really matter too much.

Comment author: adamzerner 18 April 2015 02:08:05AM *  0 points [-]

The way I'm (operationally) defining Preferences and words like happy/utility, Preferences are by definition what provides us with the most happiness/utility. Consider this thought experiment:

You start off as a blank slate and your memory is wiped. You then experience some emotion, and you experience this emotion to a certain magnitude. Let's call this "emotion-magnitude A".

You then experience a second emotion-magnitude - emotion-magnitude B. Now that you have experienced two emotion-magnitudes, you could compare them and say which one was more preferable.

You then experience a third emotion-magnitude and insert it into the list [A, B] according to how preferable it was. And you do this for a fourth emotion-magnitude. And a fifth. Until eventually you do it for every possible emotion-magnitude (aka conscious state aka mind-state). You then end up with a list of every possible emotion-magnitude ranked according to desirability: [1...n]. These are your Preferences.

So the way I'm defining Preferences, it refers to how desirable a certain mind-state is relative to other possible mind-states.
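The ranking procedure in the thought experiment can be sketched in code. The desirability scores here are hypothetical stand-ins for the pairwise comparisons:

```python
import bisect

def insert_by_preference(ranked, state, desirability):
    """Insert a (desirability, state) pair, keeping the list sorted
    from least to most preferable."""
    bisect.insort(ranked, (desirability, state))

preferences = []
for state, score in [("A", 0.4), ("B", 0.9), ("C", 0.6)]:
    insert_by_preference(preferences, state, score)

# The resulting ranking of mind-states, least preferable first.
print([s for _, s in preferences])  # ['A', 'C', 'B']
```

Each new mind-state only needs to be compared against the ones already ranked, which is exactly how the thought experiment builds the full list.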

Now think about consequentialism and how stuff leads to certain consequences. Part of the consequences is the mind-state it produces for you.

Say that:

  • Action 1 -> mind-state A
  • Action 2 -> mind-state B

Now remember mind-states could be ranked according to how preferable they are, like in the thought experiment. Suppose that mind-state A is preferable to mind-state B.

From this, it seems to me that the following conclusion is unavoidable:

Action 1 is preferable to Action 2.

In other words, Action 1 leads you to a state of mind that you prefer over the state of mind that Action 2 leads you to. I don't see any ways around saying that.

To make it more concrete, let's say that Action 1 is "going on vacation" and Action 2 is "giving to charity".

  • IF going on vacation produces mind-state A.
  • IF giving to charity produces mind-state B.
  • IF mind-state A is preferable to mind-state B.
  • THEN going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to.

I call this "preferable", but in this case words and semantics might just be distracting. As long as you agree that "going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to" when the first three bullet points are true, I don't think we disagree about anything real, and that we might just be using different words for stuff.

Thoughts?


Don't you wonder why a rational human being would choose terminal goals that aren't?

I do, but mainly from a standpoint of being interested in human psychology. I also wonder from a standpoint of hoping that terminal goals aren't arbitrary and that people have an actual reason for choosing what they choose, but I've never found their reasoning to be convincing, and I've never found their informational social influence to be strong enough evidence for me to think that terminal goals aren't arbitrary.

So based on biology and evolution, it seems like a fair assumption that humans naturally put ourselves first, all the time. But is it at all possible for humans to have evolved some small, pure, genuine concern for others (call it altruism/morality/love) that coexists with our innate selfishness? Like one human was born with an "altruism mutation" and other humans realized he was nice to have around, so he survived, and the gene is still working its way through society, shifting our preference ratios? It's a pleasant thought, anyway.

:))) [big smile] (Because I hope what I'm about to tell you might address a lot of your concerns and make you really happy.)

I'm pleased to tell you that we all have "that altruism mutation". Because of the way evolution works, we evolve to maximize the spread of our genes.

So imagine that there are two moms. They each have 5 kids, and they each enter an unfortunate situation where they have to choose between themselves and their kids.

  • Mom 1 is selfish and chooses to save herself. Her kids then die. She goes on to not have any more kids. Therefore, her genes don't get spread at all.
  • Mom 2 is unselfish and chooses to save her kids. She dies, but her genes live on through her kids.

The outcome of this situation is that there are 0 organisms with selfish genes, and 5 with unselfish genes.
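The arithmetic of the example can be made explicit. This is just a toy accounting of gene copies, not a real population-genetics model:

```python
def gene_copies_next_generation(saves_kids, n_kids):
    """Copies of the mom's genes carried into the next generation.
    Saving the kids: mom dies, but each kid carries her genes.
    Saving herself: the kids die and she has no more, so zero copies."""
    return n_kids if saves_kids else 0

print(gene_copies_next_generation(False, 5))  # Mom 1 (selfish): 0
print(gene_copies_next_generation(True, 5))   # Mom 2 (unselfish): 5
```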

And so humans (and all other animals, from what I know) have evolved a very strong instinct to protect their kin. But as we know, preference ratios diminish rapidly from there. We might care about our friends and extended family, and a little less about our extended social group, and not so much about the rest of people (which is why we go out to eat instead of paying for meals for 100s of starving kids).

As far as evolution goes, this also makes sense. A mom that acts altruistically towards her social circle would gain respect, and the tribe's respect may lead to them protecting that mom's children, thus increasing the chances that they survive and produce offspring themselves. Of course, that altruistic act by the mom may decrease her chances of surviving to produce more offspring and to take care of her current offspring, but it's a trade-off.* On the other hand, acting altruistically towards a random tribe across the world is unlikely to improve her children's chances of surviving and producing offspring, so the moms that did this have historically been less successful at spreading genes than the moms that didn't.

*Note: using mathematical models to simulate and test these trade-offs is the hard part of studying evolution. The basic ideas are actually quite simple.

But honestly, I literally didn't even know what evolution was until several weeks ago though

I'm really sorry to hear that. I hope my being sorry isn't offensive in any way. If it is, could you please tell me? I'd like to avoid offending people in the future.

so I don't really belong bringing up any science at all yet;

Not so! Science is all about using what you do know to make hypotheses about the world and to look for observable evidence to test them. And that seems to be exactly what you were doing :)

Your hypotheses and thought experiments are really impressive. I'm beginning to suspect that you do indeed have training and are denying this in order to make a status play. [joking]

Like one human was born with an "altruism mutation" and other humans realized he was nice to have around, so he survived, and the gene is still working its way through society, shifting our preference ratios?

I'd just like to offer a correction here for your knowledge. Mutations spread almost entirely because they a) increase the chances that you produce offspring or b) increase the chances that the offspring survive (and presumably produce offspring themselves).

You seem to be saying that the mutation would spread because the organism remains alive. Think about it - if an organism has a mutation that increases the chances that it remains alive but that doesn't increase the chances of having viable offspring, then that mutation would only remain in the gene pool until it died. And so of all the bajillions of our ancestors, only the ones still alive are candidates for the type of evolution you describe (mutations that only increase your chance of survival). Note that evolution is just the process of how genes spread.
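A toy calculation makes the contrast concrete. The numbers are hypothetical; the point is only that copies grow with surviving offspring, not with the carrier's own lifespan:

```python
def copies_after(generations, surviving_kids_per_carrier):
    """Copies of a gene after some generations, assuming every
    carrier passes it to surviving_kids_per_carrier offspring."""
    copies = 1
    for _ in range(generations):
        copies *= surviving_kids_per_carrier
    return copies

# A mutation that doesn't change offspring numbers (say 1 surviving
# kid per carrier) never grows; it just rides along until some
# carrier dies childless.
print(copies_after(10, 1))  # 1
# A mutation yielding 2 surviving kids per carrier spreads exponentially.
print(copies_after(10, 2))  # 1024
```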

Note: I've since realized that you may know this already, but figured I'd keep it anyway.


I got a "comment too long error" haha

Comment author: [deleted] 18 April 2015 05:03:52AM *  0 points [-]

Okay, I guess I should have known some terminology correction was coming. If you want to define "happiness" as the preferred mind-state, no worries. I'll just say the preferred mind-state of happiness is the harmony of our innate desire for pleasure and our innate desire for altruism, two desires that often overlap but occasionally compete. Do you agree that altruism deserves exactly the same sort of special recognition as an ultimate motivator that pleasure does? If so, your guess that we might not have disagreed about anything real was right.

IF going on vacation produces mind-state A. IF giving to charity produces mind-state B. IF mind-state A is preferable to mind-state B. THEN going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to.

Okay...most people want some vacation, but not full-time vacation, even though full-time vacation would bring us a LOT of pleasure. Doing good for the world is not as efficient at maximizing personal pleasure as going on vacation is. An individual must strike a balance between his desire for pleasure and his desire to be altruistic to achieve Harmonious Happiness (Look, I made up a term with capital letters! LW is rubbing off on me!)

I'm pleased to tell you that we all have "that altruism mutation". Because of the way evolution works, we evolve to maximize the spread of our genes.

Yay!!! I didn't think of a mother sacrificing herself for her kids like that, but I did think the most selfish, pleasure-driven individuals would quite probably be the most likely to end up in prison so their genes die out and less probably, but still possibly, they could also be the least likely to find spouses and have kids.

I'm really sorry to hear that. I hope my being sorry isn't offensive in any way. If it is, could you please tell me? I'd like to avoid offending people in the future.

I almost never get offended, much less about this. I appreciate the sympathy! But others could find it offensive in that they'd find it arrogant.

My thoughts on arrogance are a little unconventional. Most people think it's arrogant to consider one person more gifted than others or one idea better than others. But some people really are more gifted and have far more positive qualities than others. Some ideas really are better. If you happen to be one of the more gifted people or understand one of the better ideas (evolution, in this case), and you recognize yourself as more gifted or recognize an idea as better, that's not arrogance. Not yet. That's just an honest perspective on value. Once you start to look down on people for being less gifted than you are or having worse ideas, that's when you cross the line and become arrogant.

If you are more gifted, or have more accurate ideas, you can happily thank the universe you weren't born in someone else's shoes, while doing your best to imagine what life would have been like if you were. You can try to help others use their own gifts to the best of their potential. You can try to share your ideas in a way that others will understand. Just don't look down on people for not having certain abilities or believing the correct ideas, because you really can't understand what it's like to be them :)

But yeah, if you don't want to offend people, it's dangerous to express pity. Some people will look at your "feeling sorry" for those who don't share your intelligence/life opportunities/correct ideas and call you arrogant for it, but I think they're wrong to do so. There's a difference between feeling sorry for people and looking down on them. For example, I am a little offended when one Christian friend and her dad, who was my high school Calculus teacher, look down on me. Most of my other friends just feel sorry for me, and I would be more offended if they didn't, because feeling sorry at least shows they care.

Your hypotheses and thought experiments are really impressive. I'm beginning to suspect that you do indeed have training and are denying this in order to make a status play.

I'm flattered!! But I must confess the one thought experiment that was actually super good, the one at the end about free will, wasn't my idea. It was a paraphrase of this guy's idea and I had used it in the past to explain my deconversion to my friends. The other ideas were truly original, though :) (Not to say no one else has ever had them! Sometimes I feel like my life is a series of being very pleasantly surprised to find that other people beat me to all my ideas, like how I felt when I first read Famine, Affluence and Morality ten years after trying to convince my family it was wrong to eat in restaurants)

I'd just like to offer a correction here for your knowledge. Mutations spread almost entirely because they a) increase the chances that you produce offspring or b) increase the chances that the offspring survive (and presumably produce offspring themselves).

Hey, this sounds like what I was just reading this week in the rationality book about Adaptation-Executers, not Fitness-Maximizers! I think I get this, and maybe I didn't write very clearly (or enough) here, but maybe I still don't fully understand. But if someone is nice to have around, wouldn't he have fewer enemies and be less likely to die than the selfish guys? So he lives to have kids, and the same goes for them? Idk.

Note: I just read your note and now have accordingly decreased the probability that I had said something way off-base :)

Comment author: adamzerner 18 April 2015 05:07:54PM 0 points [-]

I'll just say the preferred mind-state of happiness is the harmony of our innate desire for pleasure and our innate desire for altruism, two desires that often overlap but occasionally compete. Do you agree that altruism deserves exactly the same sort of special recognition as an ultimate motivator that pleasure does? If so, your guess that we might not have disagreed about anything real was right.

I agree that in most cases (sociopaths are an exception) pleasure and doing good for others are both things that determine how happy something makes you. And so in that sense, it doesn't seem that we disagree about anything real.

But you use romantic sounding wording. Ex. "special recognition as an ultimate motivator".

"ultimate motivator"

So the way motivation works is that it's "originally determined" by our genes, and "adjusted/added to" by our experiences. So I agree that altruism is one of our "original/natural motivators". But I wouldn't say that it's an ultimate motivator, because to me that sounds like it implies that there's something final and/or superseding about altruism as a motivator, and I don't think that's true.

"special recognition"

I'm going to say my original thought, and then I'm going to say how I have since decided that it's partially wrong of me.

My original thought is that "there's no such thing as a special motivator". We could be conditioned to want anything. Ie. to be motivated to do anything. The way I see it, the inputs are our genes and our experiences, and the output is the resulting motivation, and I don't see how one output could be more special than another.

But that's just me failing to use the word special as is customary by a good amount of people. One use of the word special would mean that there's something inherently different about it, and it's that use that I argue against above. But another way people use it is just to mean that it's beautiful or something. Ie. even though altruism is an output like any other motivation, humans find that to be beautiful, and I think it's sensible to use the word special to describe that.

This all may sound a lot like nitpicking, and it sort of is, but not really. I actually think there's a decent chance that clarifying what I mean by these words will bring us a lot closer to agreement.

Okay...most people want some vacation, but not full-time vacation, even though full-time vacation would bring us a LOT of pleasure. Doing good for the world is not as efficient at maximizing personal pleasure as going on vacation is.

True, but that wasn't the point I was making. I was just using that as an example. Admittedly, one that isn't always true.

Yay!!!

I'm curious - was this earth shattering or just pretty cool? I got the impression that you thought that humans are completely selfish by nature.

So based on biology and evolution, it seems like a fair assumption that humans naturally put ourselves first, all the time. But is it at all possible for humans to have evolved some small, pure, genuine concern for others (call it altruism/morality/love) that coexists with our innate selfishness?

And that this makes you sad and that you'd be happier if people did indeed have some sort of altruism "built in".

I didn't think of a mother sacrificing herself for her kids like that, but I did think the most selfish, pleasure-driven individuals would quite probably be the most likely to end up in prison so their genes die out and less probably, but still possibly, they could also be the least likely to find spouses and have kids.

I think you may be misunderstanding something about how evolution works. I see that you now understand that we evolve to be "altruistic to our genes", but it's a common and understandable error to instinctively think about society as we know it. In actuality, we've been evolving very slowly over millions of years. Prisons have only existed for, idk, a couple hundred years? (I realize you might understand this, but I'm commenting just in case you didn't)

My thoughts on arrogance are a little unconventional.

Not here they're not :) And I think that description was quite eloquent.

I used to be bullied and would be sad/embarrassed if people made fun of me. But at some point I got into a fight, ended it, and had a complete 180 shift of how I think about this. Since then, I've sort of decided that it doesn't make sense at all to be "offended" by anything anyone says about you. What does that even mean? That your feelings are hurt? The way I see it:

a) Someone points out something that is both fixable and wrong with you, in which case you should thank them and change it. And if your feelings get hurt along the way, that's just a cost you have to incur along the path of seeking a more important end (self improvement).

b) Someone points out something about you that is not fixable, or not wrong with you. In that case they're just stupid (or maybe just wrong).

In reality, I'm exaggerating a bit because I understand that it's not reasonable to expect humans to react like this all the time.

It was a paraphrase of this guy's idea and I had used it in the past to explain my deconversion to my friends.

Haha, I see. Well now I'm less impressed by your intellect but more impressed with your honesty!

Sometimes I feel like my life is a series of being very pleasantly surprised to find that other people beat me to all my ideas

Yea, me too. But isn't it really great at the same time though! Like when I first read the Sequences, it just articulated so many things that I thought that I couldn't express. And it also introduced so many new things that I swear I would have arrived at. (And also introduced a bunch of new things that I don't think I would have arrived at)

But if someone is nice to have around, wouldn't he have fewer enemies and be less likely to die than the selfish guys? So he lives to have kids, and the same goes for them? Idk.

Yeah, definitely!

Comment author: [deleted] 18 April 2015 05:49:18PM 0 points [-]

But I wouldn't say that it's an ultimate motivator, because to me that sounds like it implies that there's something final and/or superseding about altruism as a motivator, and I don't think that's true.

Yes, that's exactly what I meant to imply! Finally, I used the right words. Why don't you think it's true?

I don't see how one output could be more special than another.

I did just mean "inherently different" so we're clear here. I think what makes selfishness and goodness/altruism inherently different is that other psychological motivators, if you follow them back far enough, will lead people to act in a way that they either think will make them happy or that they think will make the world a happier place.

I'm curious - was this earth shattering or just pretty cool? I got the impression that you thought that humans are completely selfish by nature.

Well, the idea of being completely selfish by nature goes so completely against my intuition, I didn't really suspect it (but I wouldn't have ruled it out entirely). The "Yay!!" was about there being evidence/logic to support my intuition being true.

I think you may be misunderstanding something about how evolution works. I see that you now understand that we evolve to be "altruistic to our genes", but it's a common and understandable error to instinctively think about society as we know it. In actuality, we've been evolving very slowly over millions of years. Prisons have only existed for, idk, a couple hundred years? (I realize you might understand this, but I'm commenting just in case you didn't)

Prisons didn't exist, but enemies did, and totally selfish people probably have more enemies... so yeah, I understand :)

I've sort of decided that it doesn't make sense at all to be "offended" by anything anyone says about you.

No, you're right! Whenever someone says something and adds "no offense" I remark that there must be something wrong with me, because I never take offense at anything. I've used your exact explanation to talk about criticism. I would rather hear it than not, because there's a chance someone recognizes a bad tendency/belief that I haven't already recognized in myself. I always ask for negative feedback from people, there's no downside to it (unless you already suffer from depression, or something).

In real life, the only time I feel offended/mildly annoyed is when someone flat-out claims I'm lying, like when my old teacher said he didn't believe me that I spent years earnestly praying for a stronger faith. But even as I was mildly annoyed, I understood his perspective completely, because he either had to disbelieve me or disbelieve his entire understanding of the Bible and a God who answers prayer.

Yea, me too. But isn't it really great at the same time though! Like when I first read the Sequences, it just articulated so many things that I thought that I couldn't express. And it also introduced so many new things that I swear I would have arrived at. (And also introduced a bunch of new things that I don't think I would have arrived at)

Yeah, ditto all the way! It's entirely great :) I feel off the hook to go freely enjoy my life knowing it's extremely probable that somewhere else, people like you, people who are smarter than I am, will have the ambition to think through all the good ideas and bring them to fruition.

Comment author: adamzerner 18 April 2015 06:02:12PM *  0 points [-]

I think what makes selfishness and goodness/altruism inherently different is that other psychological motivators, if you follow them back far enough, will lead people to act in a way that they either think will make them happy or that they think will make the world a happier place.

I think we've arrived at a core point here.

See my other comment:

I guess my whole idea is that goodness is kind of special. Most people seem born with it, to one extent or another. I think happiness and goodness are the two ultimate motivators. I even think they're the only two ultimate motivators. Or at least I can't think of any other supposed motivation that couldn't be traced back to one or both of these.

In a way, I think this is true. Actually, I should give more credit to this idea - yeah, it's true in an important way.

My quibble is that motivation is usually not rational. If it was, then I think you'd be right. But the way our brains produce motivation isn't rational. Sometimes we are motivated to do something... "just because". Ie. even if our brain knows that it won't lead to happiness or goodness, it could still produce motivation.

And so in a very real sense, motivation itself is often something that can't really be traced back. But I try really hard to respond to what people's core points are, and what they probably meant. I'm not precisely sure what your core point is, but I sense that I agree with it. That's the strongest statement I could make.

Unfortunately, I think my scientific background is actually harming me right now. We're talking about a lot of things that have very precise scientific meanings, and in some cases I think you're deviating from them a bit. Which really isn't too big a deal because I should be able to infer what you mean and progress the conversation, but I think I'm doing a pretty mediocre job of that. When I reflect, I find it difficult to deviate from the definitions I'm familiar with, which is sort of bad "conversational manners", because the only point of words in a conversation is to communicate ideas, and it'd probably be more efficient if I were better able to use other definitions.

Back to you:

Well, the idea of being completely selfish by nature goes so completely against my intuition, I didn't really suspect it (but I wouldn't have ruled it out entirely). The "Yay!!" was about there being evidence/logic to support my intuition being true.

Oh, I see.

Comment author: adamzerner 18 April 2015 02:06:27AM *  0 points [-]

So maybe preference ratios can be based mostly on happiness, but are sometimes tainted with a hint of genuine altruism?

The way I'm defining preference ratios:

Preference ratio for person X = how much you care about yourself / how much you care about person X

Or, more formally, how many units of utility person X would have to get before you'd be willing to sacrifice one unit of your own utility for him/her.

So what does altruism mean? Does it mean "I don't need to gain any happiness in order for me to want to help you, but I don't know if I'd help you if it caused me unhappiness."? Or does it mean "I want to help you regardless of how it impacts my happiness. I'd go to hell if it meant you got one extra dollar."

[When I was studying for some vocab test in middle school my cards were in alphabetical order at one point and I remember repeating a thousand times - "altruism: selfless concern for others. altruism: selfless concern for others. altruism: selfless concern for others...". That definition would imply the latter.]

Let's take the former definition. In that case, you'd want person X to get one unit of utility even if you get nothing in return, so your preference ratio would be 0. But this doesn't necessarily work in reverse. Ie. in order to save person X from losing one unit of utility, you probably wouldn't sacrifice a bajillion units of your own utility. I very well might be confusing myself with the math here.
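A rough sketch of the ratio math above, with hypothetical numbers (this assumes utility is a single comparable quantity, which is itself a simplification):

```python
# Preference ratio for person X = (how much you care about yourself)
#                               / (how much you care about person X).
# Equivalently: how many units of utility X must gain before you'd
# give up one unit of your own. All numbers here are made up.

def worth_sacrificing(my_loss, their_gain, preference_ratio):
    """True if X's gain clears your preference ratio for them."""
    return their_gain >= preference_ratio * my_loss

# A ratio of 3: X must gain 3 units for every unit you give up.
print(worth_sacrificing(1, 3, 3))        # True
print(worth_sacrificing(1, 2, 3))        # False

# The "former definition" of altruism: with a ratio of 0, you'd
# sacrifice a unit for any gain to X, however tiny.
print(worth_sacrificing(1, 0.0001, 0))   # True
```

Note the asymmetry discussed above falls out of the model too: a ratio of 0 in one direction says nothing about whether you'd sacrifice a bajillion units to prevent X losing one.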

Note: I've been trying to think about this but my approach is too simplistic and I've been countering it, but I'm having trouble articulating it. If you really want me to I could try, otherwise I don't think it's worth it. Sometimes I find math to be really obvious and useful, and sometimes I find it to be the exact opposite.

Also, what about diminishing marginal returns with donating?

This depends on the person, but I think that everyone experiences it to some extent.

Will someone even feel a noticeable increase in good feelings/happiness/satisfaction giving 18% rather than 17%? Or could someone who earns 100k purchase equal happiness with just 17k and be free to spend the extra 1k on extra happiness in the form of ski trips or berries or something (unless he was the type to never eat in restaurants)? Edit: nevermind this paragraph, even if it's realistic, it's just scope insensitivity, right?

If the person is trying to maximize happiness, the question is just "how much happiness would a marginal 1k donation bring" vs. "how much happiness would a 1k vacation bring". The answers to these questions depend on the person.
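A toy illustration of diminishing returns, using log utility (the log form and all the numbers are assumptions for illustration, not a claim about real donors):

```python
import math

# Toy model: the warm glow from donating grows like log(amount),
# so each additional 1k buys less good feeling than the one before.
def warm_glow(donation):
    return math.log(1 + donation)

first_1k = warm_glow(1_000) - warm_glow(0)
eighteenth_1k = warm_glow(18_000) - warm_glow(17_000)

# The first 1k moves the needle far more than going from 17k to 18k.
print(first_1k > eighteenth_1k)   # True
```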

Sorry, I'm not sure what you're getting at here. The person might be scope insensitive to how much impact the 1k could have if he donated it.

But similarly, let's say someone gives 12% of her income. Her personal happiness would probably be higher giving 10% to AMF and distributing 2% in person via random acts of kindness than it would giving all 12% to AMF.

Yes, the optimal donation strategy for maximizing your own happiness is different from the one that maximizes impact :)

Sooo could real altruism exist in some people and affect their preference ratios just like personal happiness does, but to a much smaller extent? Look at (1) your quote about your ambition (2) my desire to donate despite my firm belief that the happiness opportunity cost outweighs the happiness benefits (3) people who are willing to die for others and terminate their own happiness (4) people who choose to donate via effective altruism rather than random acts of kindness

2, 3 and 4 are examples of people not trying to maximize their happiness.

1 is me sometimes knowingly following an impulse my brain produces even when I know it doesn't maximize my happiness. Sadly, this happens all the time. For example, I ate Chinese food today, and I don't think that doing so would maximize my long-term happiness.

In the case of my ambitions, my brain produces impulses/motivations stemming from things including:

  • Wanting to do good.
  • Wanting to prove to myself I could do it.
  • Wanting to prove to others I could do it.
  • Social status.

Brains don't produce impulses in perfect, or even good, alignment with what they expect will maximize utility. I find the decision to eat fast food to be an intuitive example of this. But I don't see how this changes anything about Preferences or Goals.

Anyway, if there was an altruism mutation somewhere along the way, and altruism could shape our preferences like happiness, it would be a bit easier to understand the seeming discrepancy between preferences and terminal goals, between likes and wants. Here I will throw out a fancy new rationalist term I learned, and you can tell me if I misunderstand it or am wrong to think it might apply here... occam's razor?

I'm sorry, I'm trying to understand what you're saying but I think I'm failing. I think the problem is that I'm defining words differently than you. I'm trying to figure out how you're defining them, but I'm not sure. Anyway, I think that if we clarify our definitions, we'd be able to make some good progress.

altruism could shape our preferences like happiness

The way I'm thinking about it... think back to my operational definition of preferences in the first comment where I talk about how an action leads to a mind-state. What action leads to what mind-state depends on the person. An altruistic action for you might lead to a happy mind-state, and that same action might lead me to a neutral mind-state. So in that sense altruism definitely shapes our preferences.

I'm not sure if you're implying this, but I don't see how this changes the fact that you could choose to strive for any goal you want. That you could only say that a means is good at leading to an end. That you can't say that an end is good.

Ie. I could choose the goal of killing people, and you can't say that it's a bad goal. You could only say that it's bad at leading to a happy society. Or that it's bad at making me happy.

Here I will throw out a fancy new rationalist term I learned, and you can tell me if I misunderstand it or am wrong to think it might apply here... occam's razor?

That's a term that I don't think I have a proper understanding of. There was a point when I realized that it just means that A & B is always less likely than A alone, unless P(B) = 1. Like let's say that P(A) is .75. Even if P(B) is .999999, P(A & B) < P(A). And so in that sense, simpler = better.

But people use it in ways that I don't really understand. Ie. sometimes I don't get what they mean by simpler. I don't see that the term applies here though.
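The A & B arithmetic above can be checked directly (assuming A and B are independent, so P(A & B) = P(A) * P(B) — a simplification; in general it's P(A) * P(B|A)):

```python
# Occam's razor as probability: tacking an extra claim B onto A
# multiplies in a factor of at most 1, so P(A & B) never exceeds P(A).
p_a = 0.75
p_b = 0.999999  # nearly certain, but not quite

p_a_and_b = p_a * p_b  # independence assumed for simplicity
print(p_a_and_b < p_a)   # True: even a near-certain extra claim costs something
```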

Anyway, in case this idea is all silly and confused, and altruism is a socially conditioned emotion

I think it'd be helpful if you defined specifically what you mean by altruism. I mean, you don't have to be all formal or anything, but more specific would be useful.

As far as socially conditioned emotions goes, our emotions are socially conditioned to be happy in response to altruistic things and sad in response to anti-altruistic things. I wouldn't say that that makes altruism itself a socially conditioned emotion.

Do our internal mental states drive these decisions? Put in the same position 100 times, with the same internal mental state, would someone make the same decision every time, or would it be 50-50?

Wow, that's a great way to put it! You definitely have the head of a scientist :)

We don't know, but either way, we still feel like we make decisions (well, except when it comes to belief, in my experience anyway) so it doesn't really matter too much.

Yeah, I pretty much feel that way too.

Comment author: [deleted] 18 April 2015 03:48:23PM *  0 points [-]

Yeah, this has gotten a little too tangled up in definitions. Let's try again, but from the same starting point.

Happiness = preferred mind-state (similar, potentially interchangeable terms: satisfaction, pleasure)

Goodness = what leads to a happier outcome for others (similar, potentially interchangeable terms: morality, altruism)

I guess my whole idea is that goodness is kind of special. Most people seem born with it, to one extent or another. I think happiness and goodness are the two ultimate motivators. I even think they're the only two ultimate motivators. Or at least I can't think of any other supposed motivation that couldn't be traced back to one or both of these.

Pursuing a virtue like loyalty will usually lead to happiness and goodness. But is it really the ultimate motivator, or was there another reason behind this choice, i.e. it makes the virtue ethicist happy and she believes it benefits society? I'm guessing that in certain situations, the author might even abandon the loyalty virtue if it conflicted with the underlying motivations of happiness and goodness. Thoughts?

Edit: I guess I'm realizing the way you defined preference doesn't work for me either, and I should have said so in my other comment. I would say prefer simply means "tend to choose." You can prefer something that doesn't lead to the happiest mind-state, like a sacrificial death, or here's an imaginary example:

You have to choose: Either you catch a minor cold, or a mother and child you will never meet will get into a car accident. The mother will have serious injuries, and her child will die. Your memory of having chosen will be erased immediately after you choose regardless of your choice, so neither guilt nor happiness will result. You'll either suddenly catch a cold, or not.

Not only is choosing to catch a cold an inefficient happiness-maximizer like donating to effective charities, this time it will actually have a negative effect on your happiness mind-state. Can you still prefer that you catch a cold? According to what seems to me like common real-world usage of "prefer" you can. You are not acting in some arbitrary, irrational, inexplicable way in doing so. You can acknowledge you're motivated by goodness here, rather than happiness.

Comment author: adamzerner 18 April 2015 05:45:37PM *  0 points [-]

I guess my whole idea is that goodness is kind of special. Most people seem born with it, to one extent or another. I think happiness and goodness are the two ultimate motivators. I even think they're the only two ultimate motivators. Or at least I can't think of any other supposed motivation that couldn't be traced back to one or both of these.

In a way, I think this is true. Actually, I should give more credit to this idea - yeah, it's true in an important way.

My quibble is that motivation is usually not rational. If it was, then I think you'd be right. But the way our brains produce motivation isn't rational. Sometimes we are motivated to do something... "just because". Ie. even if our brain knows that it won't lead to happiness or goodness, it could still produce motivation.

And so in a very real sense, motivation itself is often something that can't really be traced back. But I try really hard to respond to what people's core points are, and what they probably meant. I'm not precisely sure what your core point is, but I sense that I agree with it. That's the strongest statement I could make.

Unfortunately, I think my scientific background is actually harming me right now. We're talking about a lot of things that have very precise scientific meanings, and in some cases I think you're deviating from them a bit. Which really isn't too big a deal because I should be able to infer what you mean and progress the conversation, but I think I'm doing a pretty mediocre job of that. When I reflect, I find it difficult to deviate from the definitions I'm familiar with, which is sort of bad "conversational manners", because the only point of words in a conversation is to communicate ideas, and it'd probably be more efficient if I were better able to use other definitions.

Pursuing a virtue like loyalty will usually lead to happiness and goodness. But is it really the ultimate motivator, or was there another reason behind this choice

Haha, you seem to be confused about virtue ethics in a good way :)

A true virtue ethicist would completely and fully believe that their virtue is inherently desirable, independent of anything and everything else. So a true virtue ethicist who values the virtue of loyalty wouldn't care whether the loyalty led to happiness or goodness.

Now, I think that consequentialism is a more sensible position, and I think you do too. And in the real world, virtue ethicists often have virtues that include happiness and goodness. And if they run into a conflict between say the virtue of goodness and the one of loyalty, well I don't know how they'd resolve it, but I think they'd give some weight to each, and so in practice I don't think virtue ethicists end up acting too crazy, because they're stabilized by their virtues of goodness and happiness. On the other hand, a virtue ethicist without the virtue of goodness... that could get scary.

I guess I'm realizing the way you defined preference doesn't work for me either

I hadn't thought about it before, but now that I do I think you're right. I'm not using the word "prefer" to mean what it really means. In my thought experiment I started off using it properly in saying that one mind-state is preferable to another.

But the error I made is in defining ACTIONS to be preferable because the resulting MIND-STATES are preferable. THAT is completely inconsistent with the way it's commonly used. In the way it's commonly used, an action is preferable... if you prefer it.

I'm feeling embarrassed that I didn't realize this immediately, but am glad to have realized it now because it allows me to make progress. Progress feels so good! So...

THANK YOU FOR POINTING THIS OUT!

According to what seems to me like common real-world usage of "prefer" you can.

Absolutely. But I think that I was wrong in an even more general sense than that.

So I think you understood what I was getting at with the thought experiment though - do you have any ideas about what words I should substitute in that would make more sense?

(I think that the fact that this is the slightest bit difficult is a huge failure of the English language. Language is meant to allow us to communicate. These are important concepts, and our language isn't giving us a very good way to communicate them. I actually think this is a really big problem. The linguistic-relativity hypothesis basically says that our language restricts our ability to think about the world, and I think (and it's pretty widely believed) that it's true to some extent (the extent itself is what's debated).)

Comment author: [deleted] 18 April 2015 06:40:59PM *  0 points [-]

In a way, I think this is true. Actually, I should give more credit to this idea - yeah, it's true in an important way.

Yay, agreement :)

My quibble is that motivation is usually not rational. If it was, then I think you'd be right. But the way our brains produce motivation isn't rational. Sometimes we are motivated to do something... "just because". Ie. even if our brain knows that it won't lead to happiness or goodness, it could still produce motivation.

Great point. I actually had a similar thought and added the qualifier "psychological" in my previous comment. Maybe "rational" would be better. Maybe there are still physical motivators (addiction, inertia, etc?) but this describes the mental motivators? Does this align any better with your scientific understanding of terminology? And don't feel bad about it, I'm sure the benefits of studying science outweigh the cost of the occasional decrease in conversation efficiency :)

A true virtue ethicist would completely and fully believe that their virtue is inherently desirable, independent of anything and everything else. So a true virtue ethicist who values the virtue of loyalty wouldn't care whether the loyalty led to happiness or goodness.

Then I think very, very few virtue ethicists actually exist, and virtue ethics is so abnormal it could almost qualify as a psychological disorder. Like the common ethics dilemma of exposing hidden Jews: if someone's virtue was "honesty," they would have to. (In the philosophy class I took, we resolved this dilemma by redefining "truth" and capitalizing; e.g. Timmy's father is a drunk. Someone asks Timmy if his father is a drunk. Timmy says no. Timmy told the Truth.) We whizzed through that boring old "correspondence theory" in ten seconds flat. I will accept any further sympathy you wish to express. Anyway, I think that any virtue besides happiness and goodness will have some loophole where 99% of people will abandon it if they run into a conflict between their chosen virtue and the deeper psychological motivations of happiness and goodness.

Edit: A person with extremely low concern for goodness is a sociopath. The amount of concern someone has for goodness as a virtue vs. amount of concern for personal happiness determines how altruistic she is, and I will tentatively call this a psychological motivation ratio, kind of like a preference ratio. And some canceling occurs in this ratio because of overlap.

But the error I made is in defining ACTIONS to be preferable because the resulting MIND-STATES are preferable. THAT is completely inconsistent with the way it's commonly used. In the way it's commonly used, an action is preferable... if you prefer it.

Yes! I wish I could have articulated it that clearly for you myself.

Instead of saying we "prefer" an optimal mind-state... you could say we "like" it the most, but that might conflict with your scientific definitions for likes and wants. But here's an idea, feel free to critique it...

"Likes" are things that actually produce the happiest, optimal mind-states within us

"Wants" are things we prefer, things we tend to choose when influenced by psychological motivators (what we think will make us happy, what we think will make the world happy)

Some things, like smoking, we neither like (or maybe some people do, idk) nor want, but we still do because the physical motivators overpower the psychological motivators (i.e. we have low willpower)

I think that the fact that this is the slightest bit difficult is a huge failure of the English language.

Absolutely!! I'll check out that link.

Comment author: adamzerner 18 April 2015 08:00:53PM *  0 points [-]

Great point. I actually had a similar thought and added the qualifier "psychological" in my previous comment. Maybe "rational" would be better. Maybe there are still physical motivators (addiction, inertia, etc?) but this describes the mental motivators? Does this align any better with your scientific understanding of terminology?

Hmmm, so the question I'm thinking about is, "what does it mean to say that a motivation is traced back to something". It seems to me that the answer to that involves terminal and instrumental values. Like if a person is motivated to do something, but is only motivated to do it to the extent that it leads to the person's terminal value, then it seems that you could say that this motivation can be traced back to that terminal value.

And so now I'm trying to evaluate the claim that "motivations can always be traced back to happiness and goodness". This seems to be conditional on happiness and goodness being terminal goals for that person. But people could, and often do, choose whatever terminal goals they want. For example, people have terminal goals like "self improvement" and "truth" and "be a man" and "success". And so, I think that a person with a terminal goal other than happiness and goodness will have motivations that can't be traced back to happiness or goodness.

But I think that it's often the case that motivations can be traced back to happiness and goodness. Hopefully that means something.

(In the philosophy class I took, we resolved this dilemma by redefining "truth" and capitalizing; e.g. Timmy's father is a drunk. Someone asks Timmy if his father is a drunk. Timmy says no. Timmy told the Truth.) We whizzed through that boring old "correspondence theory" in ten seconds flat.

Wait... so the Timmy example was used to argue against correspondence theory? Ouch.

Anyway, I think that any virtue besides happiness and goodness will have some loophole where 99% of people will abandon it if they run into a conflict between their chosen virtue and the deeper psychological motivations of happiness and goodness.

Perhaps. Truth might be an exception for some people. Ex. some people may choose to pursue the truth even if it's guaranteed to lead to decreases in happiness and goodness. And success might also be an exception for some people. They also may choose to pursue success even if it's guaranteed to lead to decreases in happiness and goodness. But this becomes a question of some sort of social science rather than of philosophy.

The amount of concern someone has for goodness as a virtue vs. amount of concern for personal happiness determines how altruistic she is, and I will tentatively call this a psychological motivation ratio, kind of like a preference ratio.

I like the concept! I propose that you call it an altruism ratio as opposed to a psychological motivation ratio because I think the former is less likely to confuse people.

Instead of saying we "prefer" an optimal mind-state... you could say we "like" it the most, but that might conflict with your scientific definitions for likes and wants.

Eh, I think that this would conflict with the way people use the word "like" in a similar way to the problems I ran into with "preference". For example, it makes sense to say that you like mind-state A more than mind-state B. But I'm not sure that it makes sense to say that you necessarily like action A more than action B, given the way people use the term "like". Damn language! :)

Comment author: [deleted] 18 April 2015 08:32:40PM *  0 points [-]

And so now I'm trying to evaluate the claim that "motivations can always be traced back to happiness and goodness". This seems to be conditional on happiness and goodness being terminal goals for that person.

I had just reached the same conclusion myself! So I think that yeah, happiness and goodness are the only terminal values, for the vast majority of the thinking population :)

Note: I really don't like the term "happiness" to describe the optimal mind-state since I connect it too strongly with "pleasure" so maybe "satisfaction" would be better. I think of satisfaction as including both feelings of pleasure and feelings of fulfillment. What do you think?

For example, people have terminal goals like "self improvement" and "truth" and "be a man" and "success"

I think that all these are really just instrumental goals that people subconsciously, and perhaps mistakenly, believe will lead them to their real terminal goals of greater personal satisfaction and/or an increase in the world's satisfaction.

Wait... so the Timmy example was used to argue against correspondence theory? Ouch.

It was an example of whatever convoluted theory my professor invented as a replacement for correspondence theory.

But this becomes a question of some sort of social science rather than of philosophy.

Exactly. I think people like the ones you mention are quite rare.

I like the concept! I propose that you call it an altruism ratio as opposed to a psychological motivation ratio because I think the former is less likely to confuse people.

Ok, thanks :)

Eh, I think that this would conflict with the way people use the word "like" in a similar way to the problems I ran into with "preference". For example, it makes sense to say that you like mind-state A more than mind-state B. But I'm not sure that it makes sense to say that you necessarily like action A more than action B, given the way people use the term "like". Damn language! :)

What if language isn't the problem? Maybe the connection between mind-states and actions isn't so clear-cut after all. If you like mind-state A more than mind-state B, then action A is mind-state-optimizing, but I'm not sure you can go much farther than that... because goodness.

Comment author: adamzerner 18 April 2015 10:54:18PM *  0 points [-]

I had just reached the same conclusion myself! So I think that yeah, happiness and goodness are the only terminal values, for the vast majority of the thinking population :)

:)

Note: I really don't like the term "happiness" to describe the optimal mind-state since I connect it too strongly with "pleasure" so maybe "satisfaction" would be better. I think of satisfaction as including both feelings of pleasure and feelings of fulfillment. What do you think?

I haven't found a term that I really like. Utility is my favorite though.

I think that all these are really just instrumental goals that people subconsciously, and perhaps mistakenly, believe will lead them to their real terminal goals of greater personal satisfaction and/or an increase in the world's satisfaction.

Idk, I want to agree with you but I sense that it's more like 95% of the population. I know just the 2 people to ask though. My two friends are huge proponents of things like "give it your all" and "be a man".

Also, what about religious people? Aren't there things they value independent of happiness and goodness? And if so wouldn't their motivations reflect that?

Edit:

Friend 1 says it's ultimately about avoiding feeling bad about himself, which I classify as him wanting to optimize his mind-state.

Friend 2 couldn't answer my questions and said his decisions aren't that calculated.

Not too useful after all. I was hoping that they'd be more insightful.

mind-state-optimizing

Oooooo I like that term!

Maybe the connection between mind-states and actions isn't so clear-cut after all.

It seems clear-cut to me. An action leads to one state of the world, and in that state of the world you have one mind-state. Can you elaborate?

but I'm not sure you can go much farther than that... because goodness.

Not sure what you mean by that either.

Comment author: [deleted] 19 April 2015 06:55:21PM *  0 points [-]

Idk, I want to agree with you but I sense that it's more like 95% of the population. I know just the 2 people to ask though. My two friends are huge proponents of things like "give it your all" and "be a man"

Yeah, ask those friends if in a situation where "giving it their all" and "being men" made them less happy and made the world a worse place, whether they would still stick with their philosophies. And if they genuinely can't imagine a situation where they would feel less satisfied after "giving it their all," then I would postulate that as they're consciously pursuing these virtues, they're subconsciously pursuing personal satisfaction. (Edit: Just read a little further, that you already have their responses. Yeah, not too insightful, maybe I'll develop this idea a bit more and ask the rest of the LW community what they think.) (Edit #2: Thought about this a little more, and I have a question you might be able to answer. Is the subconscious considered psychological or physical?)

As for religious people...well, in the case of Christianity, people would probably just want to "become Christ-like" which, for them, overlaps really well with personal satisfaction and helping others. But in extreme cases, someone might truly aspire to "become obedient to X" in which case obedience could be the terminal value, even if the person doesn't think obedience will make them happy or make the world a better place. But I think that such ultra-religiosity is rare, and that most people are still ultimately psychologically motivated to either do what they think will make them happy, or what they think will make the world a better place. I feel like this is related to Belief in Belief but I can't quite articulate the connection. Maybe you'll understand, if not, I'll try harder to verbalize it.

It seems clear-cut to me. An action leads to one state of the world, and in that state of the world you have one mind-state.

No, if that's all you're saying, that "If you like mind-state A more than mind-state B, then action A is mind-state-optimizing," then I completely agree! For some reason, I read your sentence ("But I'm not sure that it makes sense to say that you necessarily like action A more than action B, given the way people use the term 'like'") and thought you were trying to say they necessarily like action A more... haha, oops.

Comment author: [deleted] 18 April 2015 09:08:33PM *  0 points [-]

I've tried to clarify my thoughts a bit:

Terminal values are ends-in-themselves. They are psychological motivators: reasons that explain decisions. (Physical motivators like addiction and inertia can also explain our decisions, but a rational person might wish to overcome them.) For most people, the only true terminal values are happiness and goodness, and there is almost always significant overlap between the two. Someone who truly has a terminal value that can't be traced back to happiness or goodness in some way is either (a) ultra-religious or (b) a special case for the social sciences.

Happiness ("likes") refers to the optimalness of your mind-state. Hedonistic pleasure and personal fulfillment are examples of things that contribute to happiness.

Goodness refers to what leads to a happier outcome for others.

Preferences ("wants") are what we tend to choose. These can be based on psychological or physical motivators.

Instrumental values are goals or virtues that we think will best satisfy the terminal values of happiness and goodness.

We are not always aware of what actually leads to optimal mind-states in ourselves and others.

Comment author: adamzerner 18 April 2015 10:56:58PM *  0 points [-]

Sounds good to me! Given the way you've defined things.

Edit: So what do you conclude about morality from this?