els comments on On not getting a job as an option - Less Wrong Discussion

36 Post author: diegocaleiro 11 March 2014 02:44AM


Comment author: [deleted] 15 April 2015 05:14:46AM *  0 points [-]

I doubt it. In my experience, the average person is quite stupid.

Okay, yeah, I should have added the word some. Kaczynski is the only psychopath I've really read much about, so maybe I really did extrapolate his seeming rationality onto other psychopaths, even though we probably never hear about 99% of them. That would have to be some kind of bias; out of curiosity how would you label it? Maybe survivorship bias? Or availability heuristic? Anchoring? Or maybe even all of the above?

You may need a lot less money to retire than you'd think.

Believe me, I know. Even without trying to save money, I actually end up spending less on myself (excluding having paid for college) than on charity. Free hobbies are great. I didn't mean a pension was a reason to become a detective; it would just be a nice perk. Thanks for the link, though. Lots of good articles on that site!

Most people use the term intelligence to refer to things like aptitude, working memory size and ability to remember things. I think that those things are overrated and that the ability to break things down like a reductionist is underrated.

Well, I'm biased in favor of this idea, since I have an awful memory, but a pretty good ability (sometimes too good for my own good) to break things down like a reductionist and dissolve topics. I'll check out your post tomorrow and try to give some feedback.

even though I think I'm an amazing writer :)

I think so too!

I actually don't even think there's that much to say.

Nope, there's really not, but another thing I've realized from reading SSC is that a major component of great writing (and teaching) is the sharing of relevant, interesting, relatable examples to help an idea. If you skillfully parse through an idea, the audience will probably understand it at the time. But if you want the idea to actually sink in and stick with them, great examples are key. This is one reason I like Scott's posts so much; they actually affect my life. Personally, I was borderline cocky when I was younger (but followed social norms and concealed it). Then, I got older and started to read more and more, moved to the Bay Area, and met loads of smart people. Because of this, my self-esteem began to plummet, but I read that article just in time to stabilize it at a healthy, realistic level.

Anyway, Scott allows people to go easy on themselves for contributing less to the world than they might like, relative to their innate ability. Can we also go easy on ourselves relative to innate conscientiousness?

people fall victim to scope insensitivity

Yeah, this is sooo real. On a logical level, it's easy to recognize my scope insensitivity. On a "feeling" level, I still don't feel like I have to go out and do something about it. But I don't want to admit my preference ratios are that far out of whack; I don't want to be that selfish. Ugh. Now I feel like I should do something ambitious again, I'm so waffley about this. Thanks for all the help thinking through everything. This is BY FAR the best guidance anyone has ever given me in my life.

I'm confused. If you assume that dying is bad, you have a lot to lose (proportional to the badness of dying). Are you considering death to be a neutral event?

No... sorry, I was just working through my first thoughts about the idea, not making a meaningful point. Continuing on the selfishness idea, all I meant was that the researchers themselves would surely die eventually without AI, so even if AI made the world end a few years earlier for them, they personally have nothing to lose relative to what they could gain (dying a few years earlier vs. living forever). My first thought was "that's selfish, in a bad way, since they care less than the bajillions of still unborn people would about whether humans go extinct" but then I extrapolated the idea that the researcher would die without AI to the idea that humanity would eventually go extinct without AI and decided it was selfish in a good way.

Anyway, another question for you. You know how you said we care only about our own happiness? Have you read the part of the sequences/rationality book where Eliezer brings up someone being willing to die for someone else? If so, what did you make of it? If not, I'll go back and find exactly where it was.

Comment author: adamzerner 15 April 2015 02:40:26PM *  1 point [-]

Kaczynski is the only psychopath I've really read much about, so maybe I really did extrapolate his seeming rationality onto other psychopaths

I don't know too much about him other than the basics ("he argued that his bombings were extreme but necessary to attract attention to the erosion of human freedom necessitated by modern technologies requiring large-scale organization").

I think that his concerns are valid, but I don't see how the bombings help him achieve the goal of bumping humanity off that path. Perhaps he knew he'd get caught and his manifesto would get attention, but a) there's still a better way to achieve his goals, and b) he should have realized that people have a strong bias against serial killers.

The reason I think his concerns are valid is because capitalism tries to optimize for wanting, which is sometimes quite different from liking. And anecdotally, this seems to be a big problem.

That would have to be some kind of bias; out of curiosity how would you label it? Maybe survivorship bias? Or availability heuristic? Anchoring? Or maybe even all of the above?

I'm not sure what the bias is called :/. I know it exists and that there's a formal name, though. I know because I remember someone calling me out on it in LWSH :)

Nope, there's really not, but another thing I've realized from reading SSC is that a major component of great writing (and teaching) is the sharing of relevant, interesting, relatable examples to help an idea.

Yes, I very much agree. At times I think the articles on LW fail to do this. Humans need to have their System 1s massaged in order to understand things intuitively.

Anyway, Scott allows people to go easy on themselves for contributing less to the world than they might like, relative to their innate ability. Can we also go easy on ourselves relative to innate conscientiousness?

Idk. This seems to be a question involving terminal goals. Ie. if you're asking whether our innate conscientiousness makes us "good" or "bad".

When I think of morality this is the/one question I think of: "What are the rules we'd ask people to follow in order to promote the happiest society possible?". I'm sure you could nitpick at that, but it should be sufficient for this conversation. Example: the law against killing is good because if we didn't have it, society would be worse off. Similarly, there are norms of certain preference ratios that lead to society being better off.

I don't think we'd be better off if the norm was to have, say, equal preference ratios for everyone in the world. Doing so is very unnatural and would be very difficult, if not impossible. You have to weigh the costs of going against our impulses against the benefits that marginal conscientiousness would bring.

I'm not sure where the "equilibrium" points are. Honestly, I think I'd be lying to myself if I said that a preference ratio of 1,000,000,000:1 for you over another human would be overall beneficial to society. I suspect that subsequent generations will realize this and look at us in a similar way to how we look at Nazis (maybe not that bad, but still pretty bad). Morality seems to "evolve" from generation to generation.

Personally, my preference ratios are pretty bad. Not as bad as the average person because I'm less scope insensitive, but still bad. Ex. I eat out once in a while. You might say "oh well that's reasonable". But I could eat brown rice and frozen vegetables for very cheap and be like 70% as satisfied, and pay for x meals for people that are quite literally starving.

But I continue to eat out once in a while, and honestly, I don't feel (that) bad about it. Because I accept that my preference ratios are where they are (pretty much), and I think it makes sense for me to pursue the goal of achieving my preferences. To be less precise and more blunt, "I accept that I'm selfish".
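To make the "preference ratio" idea a bit more concrete, here's a toy calculation (a minimal sketch in Python; the ratio, prices, and satisfaction numbers are all made up for illustration) of how a lopsided ratio cashes out in the eating-out example:

```python
# Toy "preference ratio" calculation; every number here is made up.
preference_ratio = 1000        # weight my own happiness 1000x a stranger's
cost_eating_out = 15.0         # dollars for a restaurant meal
cost_rice_and_veg = 2.0        # dollars for the cheap alternative
my_satisfaction_out = 1.0      # normalized satisfaction from eating out
my_satisfaction_cheap = 0.7    # "like 70% as satisfied"
cost_per_donated_meal = 0.5    # hypothetical dollars to feed one starving person

# Money saved by eating cheap buys this many meals for starving people:
meals_funded = (cost_eating_out - cost_rice_and_veg) / cost_per_donated_meal

# Weighted "utility" of each choice, using the preference ratio:
utility_eat_out = preference_ratio * my_satisfaction_out
utility_eat_cheap = preference_ratio * my_satisfaction_cheap + meals_funded * 1.0

print(utility_eat_out, utility_eat_cheap)  # 1000.0 vs 726.0
# With a ratio this lopsided, eating out "wins" -- which is the sense in
# which I say my preference ratios are pretty bad.
```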

And so to answer your question:

Can we also go easy on ourselves relative to innate conscientiousness?

I think that the answer is yes. Main reason: because it's unreasonable to expect you to change your ratios much.

Yeah, this is sooo real. On a logical level, it's easy to recognize my scope insensitivity. On a "feeling" level, I still don't feel like I have to go out and do something about it.

It's great that you understand it on a logical level. No one has made much progress on the feeling level. As long as you're aware of the bias and make an effort to massage your "feeling level" towards being more accurate, you should be fine.

But I don't want to admit my preference ratios are that far out of whack; I don't want to be that selfish.

Why?

I think that exploring and answering that question will be helpful.

Try thinking about it in two ways:

1) A rational analysis of what you genuinely think makes sense. Note that rational does not mean completely logical.

2) An emotional analysis of what you feel, why you feel it, and in the event that your feelings aren't accurate, how you can nudge them to be more accurate.

This is BY FAR the best guidance anyone has ever given me in my life.

Wow! Thanks for letting me know. I'm really happy to help. I've been really impressed with your ability to pursue things, even when it's uncomfortable. It's a really important ability and most people don't have it.

I think that not having that ability is often a bottleneck that prevents progress. Ex. an average person with that ability can probably make much more progress than a high IQ person without it (in some ways). It's nice to have a conversation that actually progresses along nicely.

Anyway, another question for you. You know how you said we care only about our own happiness? Have you read the part of the sequences/rationality book where Eliezer brings up someone being willing to die for someone else? If so, what did you make of it? If not, I'll go back and find exactly where it was.

I think I have. I remember it being one of the few instances where it seemed to me that Eliezer was misguided. Although:

1) I remember going through it quickly and not giving it nearly as much thought as I would like. I'm content enough with my current understanding, and busy enough with other stuff that I chose to put it off until later. Although I do notice confusion - I very well may just be procrastinating.

2) I have tremendous respect for Eliezer. And so I definitely take note of his conclusions. The following thoughts are a bit dark and I hesitate to mention them... but:

a) Consider the possibility that he does actually agree with me, but he thinks that what he wrote will have a more positive impact on humanity (by influencing readers).

b) In the case that he really does believe what he writes, consider that it may not be best to convince him otherwise. Ie. he seems to be a very influential person in the field of FAI, and it's very much in humanity's interest for that person to be unselfish.

I haven't thought this through enough to make these points public, so please take note of that. Also, if you wouldn't mind summarizing/linking to where and why he disagrees with me, I'd very much appreciate it.

Edit: Relevant excerpt from HPMOR

They both laughed, then Harry turned serious again. "The Sorting Hat did seem to think I was going to end up as a Dark Lord unless I went to Hufflepuff," Harry said. "But I don't want to be one."

"Mr. Potter..." said Professor Quirrell. "Don't take this the wrong way. I promise you will not be graded on the answer. I only want to know your own, honest reply. Why not?"

Harry had that helpless feeling again. Thou shalt not become a Dark Lord was such an obvious theorem in his moral system that it was hard to describe the actual proof steps. "Um, people would get hurt?"

"Surely you've wanted to hurt people," said Professor Quirrell. "You wanted to hurt those bullies today. Being a Dark Lord means that people you want to hurt get hurt."

Sorry, I feel like I'm linking to too many things which probably feels overwhelming. Don't feel like you have to read anything. Just thought I'd give you the option.

Comment author: [deleted] 15 April 2015 09:06:52PM *  0 points [-]

b) he should have realized that people have a strong bias against serial killers.

Yeah, this was irrational. He should have remembered his terminal value of creating change instead of focusing on his instrumental value of getting as many people as possible to read his manifesto. -gives self a little pat on back for using new terminology-

The reason I think his concerns are valid is because capitalism tries to optimize for wanting

Could you please elaborate on this idea a little? Anyway, thanks for the link (don't apologize for linking so much, I love the links and read through and try to digest about 80% of them...). The liking/wanting difference is intuitive, but actually putting it into words is really helpful. I'm interested in exactly how you tie it in with Kaczynski, and I also think it's relevant to my current dilemma.

Anyway, Scott's example about smoking makes it seem as if people want to smoke but don't like it. I think it's the opposite; they like smoking, but don't want to smoke. Do I really have these two words backwards? We need definitions. I think "liking" has more to do with your preferences, while "wanting" has to do with your goals. I recognize in myself, that if I like something, it's very hard for me not to want it, and personally I find matrix-type philosophy questions to actually be difficult. That's why I've never tried smoking; I was scared I might like it and start to want it. Without having tried it, it's easy to say that it's not what I want for myself. Is this only because I think it would bring me less happiness in the long run? I don't think so. Even if you told me with certainty that smoking (or drugs) feels so incredibly good and is so incredibly fun that it could bring me happiness that outweighs the unhappiness caused by the bad stuff, I still wouldn't want it! And I have no idea why. Which makes me wonder... what if I had never experienced how wonderful a fun-filled mostly-hedonic lifestyle is? Would I truly want it? Or am I just addicted?

You might say "oh well that's reasonable". But I could eat brown rice and frozen vegetables for very cheap and be like 70% as satisfied, and pay for x meals for people that are quite literally starving.

Funny that you mention this example; I wouldn't say it's reasonable. Let me share a little story. When I was way younger, maybe 10 years ago, I went through a brief phase where I tried to convince my friends and family that eating at restaurants was wrong, saying "What if there were children in pain from starvation right outside the restaurant, and you knew the money you would spend in the restaurant could buy them rice and beans for two weeks... you would feel guilty about eating at the restaurant instead of helping, right? ("yes") This is your conscience, right? ("yes") Your conscience is from God, right? ("yes") People in Africa are just as important as people in the US, right? ("yes") Therefore, isn't it wrong to eat at a restaurant instead of donating the money to help starving kids in Africa? ("no") Why? ("it just isn't!")... at which point they would insist that if I truly believed this was wrong, I should act accordingly, and I just told them "No, I can't, I'm too selfish... and besides, saving eternal souls is more important than feeding starving children." Then I looked at all the smart, unselfish adults I knew who still ate at restaurants, told myself I must be wrong somehow, and avoided thinking about the issue until we read Singer's Famine, Affluence, and Morality in college (In my final semester, this was the class where it first occurred to me that there was nothing wrong with putting effort into school beyond what was necessary for perfect grades). I was really excited when we read it and was eagerly anticipating discussing it the next class to finally hear if someone could give a solid refutation of my old idea. My professor cancelled class that day, and we never went back to the topic. I cared, but unfortunately not quite enough to go talk to my professor outside of class. That was for nerds. So I went on believing it was "wrong" to eat in restaurants, but to protect my sanity, didn't think about it or do anything about it, even after de-converting from Christianity... until I came across Scott's post Nobody Is Perfect, Everything is Commensurable, which seems incredibly obvious in hindsight, yet was exactly what I needed to hear at the time.

I don't think we'd be better off if the norm was to have, say equal preference ratios for everyone in the world.

I disagree. I think we would be better off if society could somehow advance to a stage where such unselfishness was the norm. Whether this is possible is another question entirely, but I keep trying to rid myself of the habit of thinking natural = better (personally, I see this habit as another effect of Christianity; I'm continually amazed to find just how much of my worldview it shaped).

I think that exploring and answering [Why don't I want selfish preference ratios?] will be helpful.

I want to answer this question with "because emotion!" Is this allowed? Or is it akin to "explaining" something by calling it an emergent phenomenon?

1) Rationally, I can't trace this back any farther than calling it a feeling. Was I born with this feeling? Is it the result of society? I don't know. I don't honestly think unselfish preference ratios would lead to a personal increase in my overall happiness, that's for sure. Take effective altruism, for example. When I donate money, I don't feel warm and fuzzy. I get a very small amount of personal satisfaction, societal respect, and a tiny reduction in the (already very small) guilt I feel for having such a good life. But honestly I rarely think about it, and I'm 99.99% sure the overall impact on my happiness is much smaller than if I were to use the money to fly to Guatemala and take a few weeks' vacation to visit old friends. Yet, even as I acknowledge this, I still want to donate. I don't know why. So I think that based solely on my intuition here, I might disagree with you and find personal happiness and altruism to be two separate terminal goals, often harmonious but sometimes conflicting.

2) Analyze emotion?? Can you do that?! As an istp, just identifying emotion is difficult enough.

As for your points about Eliezer...

a) Yeah, I have considered this too. But I think most of his audience is rational enough that if he said something that wasn't rational, his credibility could take a hit. Whether this would stop him and how much of a consequentialist he really is, I have no idea.

b) Yeah, this is an interesting microcosm of the issue of whether we want to believe what is true vs. what is best for society. That said, I'm not saying Eliezer is wrong. My intuition does take his side now, but I usually don't trust my intuitions very much.

Anyway, I went back through the book and found the title of the post. It's Terminal Values and Instrumental Values. You can jump to "Consider the philosopher."

Harry had that helpless feeling again. Thou shalt not become a Dark Lord was such an obvious theorem in his moral system that it was hard to describe the actual proof steps. "Um, people would get hurt?"

"Surely you've wanted to hurt people," said Professor Quirrell. "You wanted to hurt those bullies today. Being a Dark Lord means that people you want to hurt get hurt."

Good quote! Right now, I interpret this as showing how personal happiness and "altruism/not becoming a Dark Lord" are both inexplicable, perhaps sometimes competing terminal values... how do you interpret it?

Comment author: adamzerner 16 April 2015 01:32:08AM *  0 points [-]

Could you please elaborate on this idea a little? ... I'm interested in exactly how you tie it in with Kaczynski, and I also think it's relevant to my current dilemma.

Sure!

In brief: Kaczynski seems to have realized that economies are driven by wanting, not liking, and that this will lead to unhappiness. I think that that conclusion is too strong though - I'd just say that it'll lead to inefficiency.

Longer explanation: ok, so the economy is pretty much driven by what people choose to buy, and where people choose to work. People aren't always so good at making these choices. One reason is because they don't actually know what will make them happy.

  • Example: job satisfaction is important. There are lots of subtle things that influence job satisfaction. For example, there's something about things like farming that produces satisfaction and contentment. People don't value these things enough -> these jobs disappear -> people miss out on the opportunity to be satisfied and content.

Another reason why people aren't good at making choices is because they don't always have the willpower to do what they know they should.

  • Example: if people were smart, McDonalds wouldn't be the huge empire that it is. People choose to eat at McDonalds because they don't weigh the consequences it has on their future selves enough. The reason why McDonalds is huge is because tons of people make these mistakes. If people were smart, MealSquares and McDonalds would be flip-flopped.

Kaczynski seems to focus more on the first example, but I think they're both important. Economies are driven by the decisions we make. Given the predictable mistakes people make, society will suffer in predictable ways. Kaczynski seems to have realized this.

I avoided using the terms "wanting" and "liking" on purpose. I'll just say quickly that words are just symbols that refer to things, and as long as the two people are using the same symbol-thing mappings, it doesn't matter. What's important is that you seem to understand the distinction between the two things as far as wanting/liking goes. I do see what you mean about the term "wanting", and now that I think about it I agree with you.

(I've avoided elaboration and qualifiers in favor of conciseness and clarity. Let me know if you want me to say more.)

Edit: I'm about 95% sure that there's actual neuroscience research behind the wanting vs. liking thing. Ie. they've found a distinct brain area that corresponds to wanting, and they've found a different distinct brain area that corresponds to liking.

Note: I studied neuroscience in college. I did research in a lab where we studied vision in monkeys, and part of this involved stimulating the monkey's brain. There was a point where we were able to get the monkey to basically make any eye movement we wanted (based on where and how much we stimulated). It didn't provide me with any new information as far as free will goes, but literally seeing it in person with my own eyes influenced me on an emotional level.

That's why I've never tried smoking; I was scared I might like it and start to want it.

Interesting, I've never smoked, drank or done any drugs at all for similar reasons. Well, that's part of the story.

Would I truly want it? Or am I just addicted?

I'm going to guess that the reason why you wouldn't want to do drugs even if you knew they'd make you happy is because a) it'd sort of numb you away from thinking critically and making decisions, and b) you wouldn't get to do good for the world. Your current lifestyle doesn't seem to be preventing you from doing either of those.

"What if there were children in pain from starvation right outside the restaurant, and you knew the money you would spend in the restaurant could buy them rice and beans for two weeks... you would feel guilty about eating at the restaurant instead of helping, right?

:) I've proposed the same thought experiment except with buying diamonds. Eg. "Imagine that you go to the diamond store to buy a diamond, and there were x thousand starving kids in the parking lot who you could save if you spent the money on them instead. Would you still buy the diamond?"

And in the case of diamonds, it's not only a) the opportunity cost of doing good with the money - it's that b) you're supporting an inhumane organization and c) you're falling victim to a ridiculous marketing scheme that gets you to pay tens of thousands of dollars for a shiny rock. The post Diamonds are Bullshit on Priceonomics is great.

Furthermore, people do a, b and c in the name of love. To me, that seems about as anti-love as it gets. Sorry, this is a pet peeve of mine. It's amazing how far you could push a human away from what's sensible. If I had an online dating profile, I think it'd be, "If you still think you'd want a diamond after reading this, then I hate you. If not, let's talk."

I know I haven't acknowledged the main counterargument, which is that the sacrifice is a demonstration of commitment, but there are ways of doing that without doing a, b and c.

Why? ("it just isn't!")

That sort of thinking baffles me as well. I've tried to explain to my parents what a cost-benefit analysis is... and they just don't get it. This post has been of moderate help to me because I understood what virtue ethics is after reading it (I never understood it before reading it).

People who say "it just isn't" don't think in terms of cost-benefit analyses. They just have ideas about what is and isn't virtuous. As people like us have figured out, if you follow these virtues blindly, you'll run into ridiculousness and/or inconsistency.

However, this isn't to say that virtue-driven thinking doesn't have its uses. Like all heuristics, it trades accuracy for speed, which is sometimes a worthy trade-off.

I disagree. I think we would be better off if society could somehow advance to a stage where such unselfishness was the norm.

I'm glad to hear you disagree :) But I sense that I may not have explained what I think and why I think it. If you could just flip a switch and make everyone have equal preference ratios, I think that'd probably be a good thing.

What I'm trying to say is that there is no switch, and that making our preference ratios more equal would be very difficult. Ex. try to make yourself care about a random accountant in China as much as you do about, say, your aunt. As far as cost-benefit analysis goes, the effort and unease of doing this would be a cost. I sense that the costs aren't always worth the benefits, and that given this, it's socially optimal for us to accept our uneven preference ratios to some extent. Thoughts?

Good quote! Right now, I interpret this as showing how personal happiness and "altruism/not becoming a Dark Lord" are both inexplicable, perhaps sometimes competing terminal values... how do you interpret it?

I interpret it as "Harry seems to think there are good reasons for choosing certain terminal values. Terminal values seem arbitrary to me."

Comment author: [deleted] 17 April 2015 06:08:28AM 0 points [-]

(I've avoided elaboration and qualifiers in favor of conciseness and clarity. Let me know if you want me to say more.)

Nope, your longer explanation was perfect, and now I understand, thanks. I'm just a little curious why you would say those things lead to inefficiency instead of unhappiness, but you don't have to elaborate any more here unless you feel like it.

Well, that's part of the story.

Again, now I'm slightly curious about the rest of it...

I'm going to guess that the reason why you wouldn't want to do drugs even if you knew they'd make you happy is because a) it'd sort of numb you away from thinking critically and making decisions, and b) you wouldn't get to do good for the world. Your current lifestyle doesn't seem to be preventing you from doing either of those.

Good guess. You're right. But (I initially thought) smoking would hardly prevent those things, and I still don't want to smoke. Then again, addiction could interfere with a), and the opportunity cost of buying cigarettes could interfere with b).

I've proposed the same thought experiment except with buying diamonds.

No way! A while back, I facebook-shared a very similar link about the ridiculousness of the diamond marketing scheme and proposed various alternatives to spending money on a diamond ring. I wasn't even aware that the organization was inhumane... yikes, information like that should be common knowledge. Also, probably at least some people don't really want to get a diamond ring... but by the time the relationship gets serious, they can't get themselves to bring it up (girls don't want to be presumptuous, guys don't want to risk a conflict?), so yeah, definitely a good kind of thing to get out of the way in a dating profile, haha.

This post has been of moderate help to me because I understood what virtue ethics is after reading it.

Wow, that's so interesting, I'd never heard of virtue ethics before. I have many thoughts/questions about this, but let's save that conversation for another day so my brain doesn't suffer an overuse injury. My inner virtue-ethicist wants to become a more thoughtful person, but I know myself well enough to know that if I dive into all this stuff head first, it will just end up being "a weird thinking phase I went through once", and instrumentally, I want to be thoughtful because of my terminal value of caring about the world. (My gut reaction: Virtues are really just instrumental values that make life convenient for people whose terminal values are unclear/intimidating. (Like how the author of the link chose loyalty as a virtue. I bet we could find a situation in which she would abandon that loyalty.) But I also think that there's a place for cost-benefit analysis even within virtue ethics, and that virtue ethicists with thoughtfully-chosen virtues can be more efficient consequentialists, which probably doesn't make much sense, but I'd like to be both, please!)

If you could just flip a switch and make everyone have equal preference ratios, I think that'd probably be a good thing...it's socially optimal for us to accept our uneven preference ratios to some extent. Thoughts?

Oh, yeah, that makes sense to me. Kind of like capitalism, it seems to work better in practice if we just acknowledge human nature. But gradually, as a society, we can shift the preference ratios a bit, and I think we maybe are. :) We can point to a decrease in imperialism, the budding effective altruism movement, or even veganism's growing popularity as examples of this shifting preference ratio.

Comment author: adamzerner 17 April 2015 10:51:46PM *  0 points [-]

Nope, your longer explanation was perfect, and now I understand, thanks. I'm just a little curious why you would say those things lead to inefficiency instead of unhappiness, but you don't have to elaborate any more here unless you feel like it.

I didn't mean anything deep by that. Inefficiency just means "less than optimal" (or at least that's what I mean by it). For him to say that it will lead to actual unhappiness would mean that the costs are so great that they overcome any associated benefits and push whatever our default state is down until it reaches actual unhappiness. I suspect that the forces aren't strong enough to push us too far off our happiness "set points".

Again, now I'm slightly curious about the rest of it...

Just did a write up here. How convenient.

I wasn't even aware that the organization was inhumane.. yikes

Yeah, it is. Check out the movie Blood Diamond and the song Conflict Diamonds. Not the most formal sources, but at least it'll be entertaining :)

Re: virtue ethics

It seems that you don't want to think about this now. If you end up thinking about it in the future, let me know - I'd love to hear your thoughts!

Comment author: [deleted] 18 April 2015 01:15:29AM *  0 points [-]

Just did a write up here. How convenient.

I like your point about being afraid/ashamed to do something and the two cases in general and with regard to drinking as a social lubricant.

I'll post my drinking experience over there too, though I don't have too much to say.

Not the most formal sources, but at least it'll be entertaining :)

Haha, ok

It seems that you don't want to think about this now. If you end up thinking about it in the future, let me know - I'd love to hear your thoughts!

How convenient. I thought about it a bit more after all. I actually still like my initial idea of virtues being instrumental values. I commented on the link you sent me, but a lot of my comment is similar to what I commented here yesterday...

Comment author: adamzerner 18 April 2015 02:42:03AM 0 points [-]

I actually still like my initial idea of virtues being instrumental values.

As a consequentialist, that's how I'm inclined to think of it too. But I think it's important to remember that non-consequentialists actually think of virtues as having intrinsic value. Of being virtuous.

Comment author: adamzerner 16 April 2015 01:22:27AM *  0 points [-]

For reference:

But I don't want to admit my preference ratios are that far out of whack; I don't want to be that selfish.

Why?

I think that exploring and answering that question will be helpful.

Try thinking about it in two ways:

1) A rational analysis of what you genuinely think makes sense. Note that rational does not mean completely logical.

2) An emotional analysis of what you feel, why you feel it, and in the event that your feelings aren't accurate, how you can nudge them to be more accurate.

You:

I want to answer this question with "because emotion!" Is this allowed?

Also:

Analyze emotion?? Can you do that?! As an istp, just identifying emotion is difficult enough.

Absolutely! That's how I'd start off. But the question I was getting at is "why does your brain produce those emotions?" What is the evolutionary psychology behind it? What events in your life have conditioned you to produce this emotion?

By default, I think it's natural to give a lot of weight to your emotions and be driven by them. But once you really understand where they come from, I think it's easier to give them a more appropriate weight, and consequently, to better achieve your goals. (1,2,3)

And you could manipulate your emotions too. Examples: You'll be less motivated to go to the gym if you lay down on the couch. You'll be more motivated to go to the gym if you tell your friends that you plan on going to the gym every day for a month.

So I think that based solely on my intuition here, I might disagree with you and find personal happiness and altruism to be two separate terminal goals, often harmonious but sometimes conflicting.

So you don't think terminal goals are arbitrary? Or are you just proclaiming what yours are?

Edit:

But honestly I rarely think about it, and I'm 99.99% sure the overall impact on my happiness is much smaller than if I were to use the money to fly to Guatemala and take a few weeks' vacation to visit old friends. Yet, even as I acknowledge this, I still want to donate. I don't know why.

Are you sure that this has nothing to do with maximizing happiness? Perhaps the reason why you still want to donate is to preserve an image you have of yourself, which presumably is ultimately about maximizing your happiness.

(Below is a thought that ended up being a dead end. I was going to delete it, but then I figured you might still be interested in reading it.)

Also, an interesting thought occurred to me related to wanting vs. liking. Take a person who starts off with only the terminal goal of maximizing his happiness. Imagine that the person then develops an addiction, say to smoking. And imagine that the person doesn't actually like smoking, but still wants to smoke. Ie. smoking does not maximize his happiness, but he still wants to do it. Should he then decide that smoking is a terminal goal of his?

I'm not trying to say that smoking is a bad terminal goal, because I think terminal goals are arbitrary. What I am trying to say is that... he seems to be actually trying to maximize his happiness, but just failing at it.

DEAD END. That's not true. Maybe he is actually trying to maximize his happiness, maybe he isn't. You can't say whether he is or he isn't. If he is, then it leads you to say "Well if your terminal goal is ultimately to maximize your happiness... then you should try to maximize your happiness (if you want to achieve your terminal goals)." But if he isn't (just) trying to maximize happiness, he could add in whatever other terminal goals he wants. Deep down I still notice a bit of confusion regarding my conclusion that goals are arbitrary, and so I find myself trying to argue against it. But every time I do I end up reaching a dead end :/

Anyway, I went back through the book and found the title of the post. It's Terminal Values and Instrumental Values. You can jump to "Consider the philosopher."

Thank you! That does seem to be a/the key point in his article. Although "I value the choice" seems like a weird argument to me. I never thought of it as a potential counterargument. From what I can gather from Eliezer's cryptic rebuttal, I agree with him.

I still don't understand what Eliezer would say to someone that said, "Preferences are selfish and Goals are arbitrary".


1- Which isn't to imply that I'm good at this. Just that I sense that it's true and I've had isolated instances of success with it.

2 - And again, this isn't to imply that you shouldn't give emotions any weight and be a robot. I used to be uncomfortable with just an "intuitive sense" and not really understanding the reasoning behind it. Reading How We Decide changed that for me. 1) It really hit me that there is "reasoning" behind the intuitions and emotions you feel. Ie. your brain does some unconscious processing. 2) It hit me that I need to treat these feelings as Bayesian evidence and consider how likely it is that I have that intuition when the intuition is wrong vs. how likely it is that I have the intuition when the intuition is right. (There's a tiny numerical sketch of what I mean by this after note 3.)

3 - This all feels very "trying-to-be-wise-sounding", which I hate. But I don't know how else to say it.
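Here's the sketch mentioned in note 2: a tiny Bayesian update on an intuition (Python, and every probability here is made up just to show the mechanics of weighing "how often would I feel this when it's right vs. when it's wrong"):

```python
# Toy Bayesian update on an intuition; all probabilities are made up.
prior_right = 0.5            # how likely the claim is before the intuition
p_intuition_if_right = 0.8   # how often I'd feel this intuition when it's right
p_intuition_if_wrong = 0.3   # how often I'd feel it anyway when it's wrong

# Bayes' rule: P(right | intuition)
posterior_right = (p_intuition_if_right * prior_right) / (
    p_intuition_if_right * prior_right + p_intuition_if_wrong * (1 - prior_right)
)
print(round(posterior_right, 2))  # ~0.73: the intuition is evidence, not proof
```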

Comment author: [deleted] 17 April 2015 06:10:29AM *  0 points [-]

Oops, just when I thought I had the terminology down. :( Yeah, I still think terminal values are arbitrary, in the sense that we choose what we want to live for.

So you think our preference is, by default, the happiness mind-state, and our terminal values may or may not be the most efficient personal happiness-increasers. Don't you wonder why a rational human being would choose terminal goals that aren't? But we sometimes do. Remember your honesty in saying:

Regarding my happiness, I think I may be lying to myself though. I think I rationalize that the same logic applies, that if I achieve some huge ambition there'd be a proportional increase in happiness. Because my brain likes to think achieving ambition -> goodness and I care about how much goodness gets achieved. But if I'm to be honest, that probably isn't true.

I have an idea. So based on biology and evolution, it seems like a fair assumption that humans naturally put ourselves first, all the time. But is it at all possible for humans to have evolved some small, pure, genuine concern for others (call it altruism/morality/love) that coexists with our innate selfishness? Like one human was born with an "altruism mutation" and other humans realized he was nice to have around, so he survived, and the gene is still working its way through society, shifting our preference ratios? It's a pleasant thought, anyway.

But honestly, I literally didn't even know what evolution was until several weeks ago, so I don't really belong bringing up any science at all yet; let me switch back to personal experience and thought experiments.

For example, let's say my preferences are 98% affected by selfishness and maybe 2% by altruism, since I'm very stingy with my time but less so with my money. (Someone who would die for someone else would have different numbers.) Anyway, on the surface I might look more altruistic because there is a LOT of overlap between decisions that are good for others and decisions that make me feel good. Or, you could see the giant overlap and assume I'm 100% selfish. When I donate to effective charities, I do receive benefits like liking myself a bit more, real or perceived respect from the world, a small burst of fuzzy feelings, and a decrease in the (admittedly small) amount of personal guilt I feel about the world's unfairness. But if I had to put a monetary value on the happiness return from a $1000 donation, it would be less than $1000. When I use a preference ratio and prefer other people's happiness, their happiness does make me happy, but there isn't a direct correlation between how happy it makes me and the extent to which I prefer it. So maybe preference ratios can be based mostly on happiness, but are sometimes tainted with a hint of genuine altruism?

Also, what about diminishing marginal returns with donating? Will someone even feel a noticeable increase in good feelings/happiness/satisfaction giving 18% rather than 17%? Or could someone who earns 100k purchase equal happiness with just 17k and be free to spend the extra 1k on extra happiness in the form of ski trips or berries or something (unless he was the type to never eat in restaurants)? Edit: nevermind this paragraph, even if it's realistic, it's just scope insensitivity, right?

But similarly, let's say someone gives 12% of her income. Her personal happiness would probably be higher giving 10% to AMF and distributing 2% in person via random acts of kindness than it would giving all 12% to AMF. Maybe you're thinking that this difference would affect her mind-state, that she wouldn't be able to think of herself as such a rational person if she did that. But who really values their self-image of being a rational opportunity-cost analyzer that highly? I sure don't (well, 99.99% sure anyway).

Sooo could real altruism exist in some people and affect their preference ratios just like personal happiness does, but to a much smaller extent? Look at (1) your quote about your ambition (2) my desire to donate despite my firm belief that the happiness opportunity cost outweighs the happiness benefits (3) people who are willing to die for others and terminate their own happiness (4) people who choose to donate via effective altruism rather than random acts of kindness

Anyway, if there was an altruism mutation somewhere along the way, and altruism could shape our preferences like happiness, it would be a bit easier to understand the seeming discrepancy between preferences and terminal goals, between likes and wants. Here I will throw out a fancy new rationalist term I learned, and you can tell me if I misunderstand it or am wrong to think it might apply here... Occam's razor?

Anyway, in case this idea is all silly and confused, and altruism is a socially conditioned emotion, I'll attempt to find its origin. Not from giving to church (it was only fair that the pastors/teachers/missionaries get their salaries and the members help pay for building costs, electricity, etc). I guess there was the whole "we love because He first loved us" idea, which I knew well and regurgitated often, but don't think I ever truly internalized. I consciously knew I'd still care about others just as much without my faith. Growing up, I knew no one who donated to secular charity, or at least no one who talked about it. The only thing I knew that came close to resembling large-scale altruism was when people chose to be pastors and teachers instead of pursuing high-income careers, but if they did it simply to "follow God's will" I'm not sure it still counts as genuinely caring about others more than yourself. On a small-scale, my mom was really altruistic, like willing to give us her entire portion of an especially tasty food, offer us her jacket when she was cold too, etc... and I know she wasn't calculating cost-benefit ratios, haha. So I guess she could have instilled it in me? Or maybe I read some novels with altruistic values? Idk, any other ideas?

I still don't understand what Eliezer would say to someone that said, "Preferences are selfish and Goals are arbitrary".

I'm no Eliezer, but here's what I would say: Preferences are mostly selfish but can be affected by altruism, and goals are somehow based on these preferences. Whether or not you call them arbitrary probably depends on how you feel about free will. We make decisions. Do our internal mental states drive these decisions? Put in the same position 100 times, with the same internal mental state, would someone make the same decision every time, or would it be 50-50? We don't know, but either way, we still feel like we make decisions (well, except when it comes to belief, in my experience anyway) so it doesn't really matter too much.

Comment author: adamzerner 18 April 2015 02:08:05AM *  0 points [-]

The way I'm (operationally) defining Preferences and words like happy/utility, Preferences are by definition what provides us with the most happiness/utility. Consider this thought experiment:

You start off as a blank slate and your memory is wiped. You then experience some emotion, and you experience this emotion to a certain magnitude. Let's call this "emotion-magnitude A".

You then experience a second emotion-magnitude - emotion-magnitude B. Now that you have experienced two emotion-magnitudes, you could compare them and say which one was more preferable.

You then experience a third emotion-magnitude, and insert it into the list [A, B] according to how preferable it was. And you do this for a fourth emotion-magnitude. And a fifth. Until eventually you do it for every possible emotion-magnitude (aka conscious state aka mind-state). You then end up with a list of every possible emotion-magnitude ranked according to desirability. [1...n]. These are your Preferences.

So the way I'm defining Preferences, it refers to how desirable a certain mind-state is relative to other possible mind-states.

Now think about consequentialism and how stuff leads to certain consequences. Part of the consequences is the mind-state it produces for you.

Say that:

  • Action 1 -> mind-state A
  • Action 2 -> mind-state B

Now remember mind-states could be ranked according to how preferable they are, like in the thought experiment. Suppose that mind-state A is preferable to mind-state B.

From this, it seems to me that the following conclusion is unavoidable:

Action 1 is preferable to Action 2.

In other words, Action 1 leads you to a state of mind that you prefer over the state of mind that Action 2 leads you to. I don't see any ways around saying that.

To make it more concrete, let's say that Action 1 is "going on vacation" and Action 2 is "giving to charity".

  • IF going on vacation produces mind-state A.
  • IF giving to charity produces mind-state B.
  • IF mind-state A is preferable to mind-state B.
  • THEN going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to.

I call this "preferable", but in this case words and semantics might just be distracting. As long as you agree that "going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to" when the first three bullet points are true, I don't think we disagree about anything real, and that we might just be using different words for stuff.
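If it helps, here's a minimal sketch of the bookkeeping I'm describing (Python; the mind-state names, the ranking, and the action-to-mind-state mapping are all made up for illustration): Preferences as a ranked list of mind-states, and actions compared by the rank of the mind-state they produce.

```python
# Toy model: Preferences as a ranked list of mind-states.
# Mind-state names, rankings, and the action mapping are all made up.

# Index 0 is the most preferable mind-state; higher index = less preferable.
ranked_mind_states = ["contentment", "mild satisfaction", "boredom", "guilt"]

# Hypothetical mapping from actions to the mind-states they produce.
action_to_mind_state = {
    "go on vacation": "contentment",         # mind-state A
    "give to charity": "mild satisfaction",  # mind-state B
}

def preferability(action):
    """Lower rank = more preferable resulting mind-state."""
    return ranked_mind_states.index(action_to_mind_state[action])

def preferred_action(action_1, action_2):
    """Return whichever action leads to the more preferable mind-state."""
    return min(action_1, action_2, key=preferability)

print(preferred_action("go on vacation", "give to charity"))
# -> "go on vacation", because it maps to a higher-ranked mind-state
```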

Thoughts?


Don't you wonder why a rational human being would choose terminal goals that aren't?

I do, but mainly from a standpoint of being interested in human psychology. I also wonder from a standpoint of hoping that terminal goals aren't arbitrary and that people have an actual reason for choosing what they choose, but I've never found their reasoning to be convincing, and I've never found their informational social influence to be strong enough evidence for me to think that terminal goals aren't arbitrary.

So based on biology and evolution, it seems like a fair assumption that humans naturally put ourselves first, all the time. But is it at all possible for humans to have evolved some small, pure, genuine concern for others (call it altruism/morality/love) that coexists with our innate selfishness? Like one human was born with an "altruism mutation" and other humans realized he was nice to have around, so he survived, and the gene is still working its way through society, shifting our preference ratios? It's a pleasant thought, anyway.

:))) [big smile] (Because I hope what I'm about to tell you might address a lot of your concerns and make you really happy.)

I'm pleased to tell you that we all have "that altruism mutation". Because of the way evolution works, we evolve to maximize the spread of our genes.

So imagine that there are two moms. They each have 5 kids, and they each enter an unfortunate situation where they have to choose between themselves and their kids.

  • Mom 1 is selfish and chooses to save herself. Her kids then die. She goes on to not have any more kids. Therefore, her genes don't get spread at all.
  • Mom 2 is unselfish and chooses to save her kids. She dies, but her genes live on through her kids.

The outcome of this situation is that there are 0 organisms with selfish genes, and 5 with unselfish genes.

And so humans (and all other animals, from what I know) have evolved a very strong instinct to protect their kin. But as we know, preference ratios diminish rapidly from there. We might care about our friends and extended family, and a little less about our extended social group, and not so much about the rest of people (which is why we go out to eat instead of paying for meals for 100s of starving kids).

As far as evolution goes, this also makes sense. A mom that acts altruistically towards her social circle would gain respect, and the tribe's respect may lead to them protecting that mom's children, thus increasing the chances that they survive and produce offspring themselves. Of course, that altruistic act by the mom may decrease her chances of surviving to produce more offspring and to take care of her current offspring, but it's a trade-off.* On the other hand, acting altruistically towards a random tribe across the world is unlikely to improve her children's chances of surviving and producing offspring, so the moms that did this have historically been less successful at spreading genes than the moms that didn't.

*Note: using mathematical models to simulate and test these trade-offs is the hard part of studying evolution. The basic ideas are actually quite simple.
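If you ever want to poke at this yourself, here's a minimal toy simulation of the two-moms story (Python; the numbers are made up, and it deliberately ignores all the trade-offs the note above mentions). It only shows the bare mechanism: the "save your kids" gene keeps spreading while the "save yourself" gene dies out.

```python
# Toy simulation of the two-moms story; all numbers are made up.
# Each parent carries either the "selfless" or the "selfish" gene and has
# 5 kids. In a crisis, a selfless parent dies but the kids survive; a
# selfish parent survives but the kids die (and she has no more kids).

def next_generation(population):
    offspring = []
    for gene in population:
        if gene == "selfless":
            offspring.extend(["selfless"] * 5)  # kids survive, carrying the gene
        # "selfish": the parent survives, but leaves no surviving offspring
    return offspring

population = ["selfless"] * 10 + ["selfish"] * 10
for generation in range(3):
    population = next_generation(population)
    print(generation, population.count("selfless"), population.count("selfish"))
# Selfish copies drop to 0 after one generation; selfless copies keep growing.
```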

But honestly, I literally didn't even know what evolution was until several weeks ago though

I'm really sorry to hear that. I hope my being sorry isn't offensive in any way. If it is, could you please tell me? I'd like to avoid offending people in the future.

so I don't really belong bringing up any science at all yet;

Not so! Science is all about using what you do know to make hypotheses about the world and to look for observable evidence to test them. And that seems to be exactly what you were doing :)

Your hypotheses and thought experiments are really impressive. I'm beginning to suspect that you do indeed have training and are denying this in order to make a status play. [joking]

Like one human was born with an "altruism mutation" and other humans realized he was nice to have around, so he survived, and the gene is still working its way through society, shifting our preference ratios?

I'd just like to offer a correction here for your knowledge. Mutations spread almost entirely because they a) increase the chances that you produce offspring or b) increase the chances that the offspring survive (and presumably produce offspring themselves).

You seem to be saying that the mutation would spread because the organism remains alive. Think about it - if an organism has a mutation that increases the chances that it remains alive but that doesn't increase the chances of having viable offspring, then that mutation would only remain in the gene pool until it died. And so of all the bajillions of our ancestors, only the ones still alive are candidates for the type of evolution you describe (mutations that only increase your chance of survival). Note that evolution is just the process of how genes spread.

Note: I've since realized that you may know this already, but figured I'd keep it anyway.


I got a "comment too long error" haha

Comment author: [deleted] 18 April 2015 05:03:52AM *  0 points [-]

Okay, I guess I should have known some terminology correction was coming. If you want to define "happiness" as the preferred mind-state, no worries. I'll just say the preferred mind-state of happiness is the harmony of our innate desire for pleasure and our innate desire for altruism, two desires that often overlap but occasionally compete. Do you agree that altruism deserves exactly the same sort of special recognition as an ultimate motivator that pleasure does? If so, your guess that we might not have disagreed about anything real was right.

IF going on vacation produces mind-state A. IF giving to charity produces mind-state B. IF mind-state A is preferable to mind-state B. THEN going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to.

Okay...most people want some vacation, but not full-time vacation, even though full-time vacation would bring us a LOT of pleasure. Doing good for the world is not as efficient at maximizing personal pleasure as going on vacation is. An individual must strike a balance between his desire for pleasure and his desire to be altruistic to achieve Harmonious Happiness (Look, I made up a term with capital letters! LW is rubbing off on me!)

I'm pleased to tell you that we all have "that altruism mutation". Because of the way evolution works, we evolve to maximize the spread of our genes.

Yay!!! I didn't think of a mother sacrificing herself for her kids like that, but I did think the most selfish, pleasure-driven individuals would quite probably be the most likely to end up in prison, so their genes die out, and (less probably, but still possibly) they could also be the least likely to find spouses and have kids.

I'm really sorry to hear that. I hope my being sorry isn't offensive in any way. If it is, could you please tell me? I'd like to avoid offending people in the future.

I almost never get offended, much less about this. I appreciate the sympathy! But others could find it offensive in that they'd find it arrogant.

My thoughts on arrogance are a little unconventional. Most people think it's arrogant to consider one person more gifted than others or one idea better than others. But some people really are more gifted and have far more positive qualities than others. Some ideas really are better. If you happen to be one of the more gifted people or understand one of the better ideas (evolution, in this case), and you recognize yourself as more gifted or recognize an idea as better, that's not arrogance. Not yet. That's just an honest perspective on value. Once you start to look down on people for being less gifted than you are or having worse ideas, that's when you cross the line and become arrogant.

If you are more gifted, or have more accurate ideas, you can happily thank the universe you weren't born in someone else's shoes, while doing your best to imagine what life would have been like if you were. You can try to help others use their own gifts to the best of their potential. You can try to share your ideas in a way that others will understand. Just don't look down on people for not having certain abilities or not believing the correct ideas, because you really can't understand what it's like to be them :)

But yeah, if you don't want to offend people, it's dangerous to express pity. Some people will look at your "feeling sorry" for those who don't share your intelligence/life opportunities/correct ideas and call you arrogant for it, but I think they're wrong to do so. There's a difference between feeling sorry for people and looking down on them. For example, I am a little offended when one Christian friend and her dad, who was my high school Calculus teacher, look down on me. Most of my other friends just feel sorry for me, and I would be more offended if they didn't, because feeling sorry at least shows they care.

Your hypotheses and thought experiments are really impressive. I'm beginning to suspect that you do indeed have training and are denying this in order to make a status play.

I'm flattered!! But I must confess the one thought experiment that was actually super good, the one at the end about free will, wasn't my idea. It was a paraphrase of this guy's idea and I had used it in the past to explain my deconversion to my friends. The other ideas were truly original, though :) (Not to say no one else has ever had them! Sometimes I feel like my life is a series of being very pleasantly surprised to find that other people beat me to all my ideas, like how I felt when I first read "Famine, Affluence, and Morality" ten years after trying to convince my family it was wrong to eat in restaurants.)

I'd just like to offer a correction here for your knowledge. Mutations spread almost entirely because they a) increase the chances that you produce offspring or b) increase the chances that the offspring survive (and presumably produce offspring themselves).

Hey, this sounds like what I was just reading this week in the rationality book about Adaptation-Executers, not Fitness-Maximizers! I think I get this, and maybe I didn't write very clearly (or enough) here, but maybe I still don't fully understand. But if someone is nice to have around, wouldn't he have fewer enemies and be less likely to die than the selfish guys? So he lives to have kids, and the same goes for them? Idk.

Note: I just read your note and now have accordingly decreased the probability that I had said something way off-base :)

Comment author: adamzerner 18 April 2015 05:07:54PM 0 points [-]

I'll just say the preferred mind-state of happiness is the harmony of our innate desire for pleasure and our innate desire for altruism, two desires that often overlap but occasionally compete. Do you agree that altruism deserves exactly the same sort of special recognition as an ultimate motivator that pleasure does? If so, your guess that we might not have disagreed about anything real was right.

I agree that in most cases (sociopaths are an exception) pleasure and doing good for others are both things that determine how happy something makes you. And so in that sense, it doesn't seem that we disagree about anything real.

But you use romantic-sounding wording. Ex. "special recognition as an ultimate motivator".

"ultimate motivator"

So the way motivation works is that it's "originally determined" by our genes, and "adjusted/added to" by our experiences. So I agree that altruism is one of our "original/natural motivators". But I wouldn't say that it's an ultimate motivator, because to me that sounds like it implies that there's something final and/or superseding about altruism as a motivator, and I don't think that's true.

"special recognition"

I'm going to say my original thought, and then I'm going to say how I have since decided that it's partially wrong of me.

My original thought is that "there's no such thing as a special motivator". We could be conditioned to want anything. Ie. to be motivated to do anything. The way I see it, the inputs are our genes and our experiences, and the output is the resulting motivation, and I don't see how one output could be more special than another.

But that's just me failing to use the word special the way a good number of people customarily use it. One use of the word special would mean that there's something inherently different about it, and it's that use that I argue against above. But another way people use it is just to mean that it's beautiful or something. Ie. even though altruism is an output like any other motivation, humans find that to be beautiful, and I think it's sensible to use the word special to describe that.

This all may sound a lot like nitpicking, and it sort of is, but not really. I actually think there's a decent chance that clarifying what I mean by these words will bring us a lot closer to agreement.

Okay...most people want some vacation, but not full-time vacation, even though full-time vacation would bring us a LOT of pleasure. Doing good for the world is not as efficient at maximizing personal pleasure as going on vacation is.

True, but that wasn't the point I was making. I was just using that as an example. Admittedly, one that isn't always true.

Yay!!!

I'm curious - was this earth shattering or just pretty cool? I got the impression that you thought that humans are completely selfish by nature.

So based on biology and evolution, it seems like a fair assumption that humans naturally put ourselves first, all the time. But is it at all possible for humans to have evolved some small, pure, genuine concern for others (call it altruism/morality/love) that coexists with our innate selfishness?

And that this makes you sad and that you'd be happier if people did indeed have some sort of altruism "built in".

I didn't think of a mother sacrificing herself for her kids like that, but I did think the most selfish, pleasure-driven individuals would quite probably be the most likely to end up in prison, so their genes die out; less probably, but still possibly, they could also be the least likely to find spouses and have kids.

I think you may be misunderstanding something about how evolution works. I see that you now understand that we evolve to be "altruistic to our genes", but it's a common and understandable error to instinctively think about society as we know it. In actuality, we've been evolving very slowly over millions of years. Prisons have only existed for, idk, a couple hundred? (I realize you might understand this, but I'm commenting just in case you didn't)

My thoughts on arrogance are a little unconventional.

Not here they're not :) And I think that description was quite eloquent.

I used to be bullied and would be sad/embarrassed if people made fun of me. But at some point I got into a fight, ended it, and had a complete 180 shift of how I think about this. Since then, I've sort of decided that it doesn't make sense at all to be "offended" by anything anyone says about you. What does that even mean? That your feelings are hurt? The way I see it:

a) Someone points out something that is both fixable and wrong with you, in which case you should thank them and change it. And if your feelings get hurt along the way, that's just a cost you have to incur along the path of seeking a more important end (self improvement).

b) Someone points out something about you that is not fixable, or not wrong with you. In that case they're just stupid (or maybe just wrong).

In reality, I'm exaggerating a bit because I understand that it's not reasonable to expect humans to react like this all the time.

It was a paraphrase of this guy's idea and I had used it in the past to explain my deconversion to my friends.

Haha, I see. Well now I'm less impressed by your intellect but more impressed with your honesty!

Sometimes I feel like my life is a series of being very pleasantly surprised to find that other people beat me to all my ideas

Yea, me too. But isn't it really great at the same time though! Like when I first read the Sequences, it just articulated so many things that I thought that I couldn't express. And it also introduced so many new things that I swear I would have arrived at. (And also introduced a bunch of new things that I don't think I would have arrived at)

But if someone is nice to have around, wouldn't he have fewer enemies and be less likely to die than the selfish guys? So he lives to have kids, and the same goes for them? Idk.

Yeah, definitely!

Comment author: [deleted] 18 April 2015 05:49:18PM 0 points [-]

But I wouldn't say that it's an ultimate motivator, because to me that sounds like it implies that there's something final and/or superseding about altruism as a motivator, and I don't think that's true.

Yes, that's exactly what I meant to imply! Finally, I used the right words. Why don't you think it's true?

I don't see how one output could be more special than another.

I did just mean "inherently different" so we're clear here. I think what makes selfishness and goodness/altruism inherently different is that other psychological motivators, if you follow them back far enough, will lead people to act in a way that they either think will make them happy or that they think will make the world a happier place.

I'm curious - was this earth shattering or just pretty cool? I got the impression that you thought that humans are completely selfish by nature.

Well, the idea of being completely selfish by nature goes so completely against my intuition, I didn't really suspect it (but I wouldn't have ruled it out entirely). The "Yay!!" was about there being evidence/logic to support my intuition being true.

I think you may be misunderstanding something about how evolution works. I see that you now understand that we evolve to be "altruistic to our genes", but it's a common and understandable error to instinctively think about society as we know it. In actuality, we've been evolving very slowly over millions of years. Prisons have only existed for, idk, a couple hundred? (I realize you might understand this, but I'm commenting just in case you didn't)

Prisons didn't exist, but enemies did, and totally selfish people probably have more enemies... so yeah, I understand :)

I've sort of decided that it doesn't make sense at all to be "offended" by anything anyone says about you.

No, you're right! Whenever someone says something and adds "no offense," I remark that there must be something wrong with me, because I never take offense at anything. I've used your exact explanation to talk about criticism. I would rather hear it than not, because there's a chance someone recognizes a bad tendency/belief that I haven't already recognized in myself. I always ask for negative feedback from people; there's no downside to it (unless you already suffer from depression, or something).

In real life, the only time I feel offended/mildly annoyed is when someone flat-out claims I'm lying, like when my old teacher said he didn't believe me that I spent years earnestly praying for a stronger faith. But even as I was mildly annoyed, I understood his perspective completely, because he either had to disbelieve me or disbelieve his entire understanding of the Bible and a God who answers prayer.

Yea, me too. But isn't it really great at the same time though! Like when I first read the Sequences, it just articulated so many things that I thought that I couldn't express. And it also introduced so many new things that I swear I would have arrived at. (And also introduced a bunch of new things that I don't think I would have arrived at)

Yeah, ditto all the way! It's entirely great :) I feel off the hook to go freely enjoy my life knowing it's extremely probable that somewhere else, people like you, people who are smarter than I am, will have the ambition to think through all the good ideas and bring them to fruition.

Comment author: adamzerner 18 April 2015 06:02:12PM *  0 points [-]

I think what makes selfishness and goodness/altruism inherently different is that other psychological motivators, if you follow them back far enough, will lead people to act in a way that they either think will make them happy or that they think will make the world a happier place.

I think we've arrived at a core point here.

See my other comment:

I guess my whole idea is that goodness is kind of special. Most people seem born with it, to one extent or another. I think happiness and goodness are the two ultimate motivators. I even think they're the only two ultimate motivators. Or at least I can't think of any other supposed motivation that couldn't be traced back to one or both of these.

In a way, I think this is true. Actually, I should give more credit to this idea - yeah, it's true in an important way.

My quibble is that motivation is usually not rational. If it was, then I think you'd be right. But the way our brains produce motivation isn't rational. Sometimes we are motivated to do something... "just because". Ie. even if our brain knows that it won't lead to happiness or goodness, it could still produce motivation.

And so in a very real sense, motivation itself is often something that can't really be traced back. But I try really hard to respond to what people's core points are, and what they probably meant. I'm not precisely sure what your core point is, but I sense that I agree with it. That's the strongest statement I could make.

Unfortunately, I think my scientific background is actually harming me right now. We're talking about a lot of things that have very precise scientific meanings, and in some cases I think you're deviating from them a bit. Which really isn't too big a deal because I should be able to infer what you mean and progress the conversation, but I think I'm doing a pretty mediocre job of that. When I reflect, I find it difficult to deviate from the definitions I'm familiar with, which is sort of bad "conversational manners", because the only point of words in a conversation is to communicate ideas, and it'd probably be more efficient if I were better able to use other definitions.

Back to you:

Well, the idea of being completely selfish by nature goes so completely against my intuition, I didn't really suspect it (but I wouldn't have ruled it out entirely). The "Yay!!" was about there being evidence/logic to support my intuition being true.

Oh, I see.

Comment author: adamzerner 18 April 2015 02:06:27AM *  0 points [-]

So maybe preference ratios can be based mostly on happiness, but are sometimes tainted with a hint of genuine altruism?

The way I'm defining preference ratios:

Preference ratio for person X = how much you care about yourself / how much you care about person X

Or, more formally, how many units of utility person X would have to get before you'd be willing to sacrifice one unit of your own utility for him/her.
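
(To make that threshold concrete, here's a tiny made-up sketch; the function name and numbers are mine, invented just for illustration:)

    # A hypothetical illustration of the preference-ratio threshold above;
    # nothing here is "official", it's just the definition turned into code.
    def willing_to_sacrifice(preference_ratio, my_loss, their_gain):
        # Person X must gain at least `preference_ratio` units of utility
        # for every one unit of utility I give up.
        return their_gain >= preference_ratio * my_loss

    print(willing_to_sacrifice(10, my_loss=1, their_gain=12))  # True
    print(willing_to_sacrifice(10, my_loss=1, their_gain=5))   # False
    # A ratio of 0 would mean I'd accept the sacrifice even if X gains nothing:
    print(willing_to_sacrifice(0, my_loss=1, their_gain=0))    # True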

So what does altruism mean? Does it mean "I don't need to gain any happiness in order for me to want to help you, but I don't know if I'd help you if it caused me unhappiness."? Or does it mean "I want to help you regardless of how it impacts my happiness. I'd go to hell if it meant you got one extra dollar."

[When I was studying for some vocab test in middle school my cards were in alphabetical order at one point and I remember repeating a thousand times - "altruism: selfless concern for others. altruism: selfless concern for others. altruism: selfless concern for others...". That definition would imply the latter.]

Let's take the former definition. In that case, you'd want person X to get one unit of utility even if you get nothing in return, so your preference ratio would be 0. But this doesn't necessarily work in reverse. Ie. in order to save person X from losing one unit of utility, you probably wouldn't sacrifice a bajillion units of your own utility. I very well might be confusing myself with the math here.

Note: I've been trying to think about this but my approach is too simplistic and I've been countering it, but I'm having trouble articulating it. If you really want me to I could try, otherwise I don't think it's worth it. Sometimes I find math to be really obvious and useful, and sometimes I find it to be the exact opposite.

Also, what about diminishing marginal returns with donating?

This depends on the person, but I think that everyone experiences it to some extent.

Will someone even feel a noticeable increase in good feelings/happiness/satisfaction giving 18% rather than 17%? Or could someone who earns 100k purchase equal happiness with just 17k and be free to spend the extra 1k on extra happiness in the form of ski trips or berries or something (unless he was the type to never eat in restaurants)? Edit: never mind this paragraph, even if it's realistic, it's just scope insensitivity, right?

If the person is trying to maximize happiness, the question is just "how much happiness would a marginal 1k donation bring" vs. "how much happiness would a 1k vacation bring". The answers to these questions depend on the person.

Sorry, I'm not sure what you're getting at here. The person might be scope insensitive to how much impact the 1k could have if he donated it.

But similarly, let's say someone gives 12% of her income. Her personal happiness would probably be higher giving 10% to AMF and distributing 2% in person via random acts of kindness than it would giving all 12% to AMF.

Yes, the optimal donation strategy for maximizing your own happiness is different from the one that maximizes impact :)

Sooo could real altruism exist in some people and affect their preference ratios just like personal happiness does, but to a much smaller extent? Look at (1) your quote about your ambition (2) my desire to donate despite my firm belief that the happiness opportunity cost outweighs the happiness benefits (3) people who are willing to die for others and terminate their own happiness (4) people who choose to donate via effective altruism rather than random acts of kindness

2, 3 and 4 are examples of people not trying to maximize their happiness.

1 is me sometimes knowingly following an impulse my brain produces even when I know it doesn't maximize my happiness. Sadly, this happens all the time. For example, I ate Chinese food today, and I don't think that doing so would maximize my long-term happiness.

In the case of my ambitions, my brain produces impulses/motivations stemming from things including:

  • Wanting to do good.
  • Wanting to prove to myself I could do it.
  • Wanting to prove to others I could do it.
  • Social status.

Brains don't produce impulses in perfect, or even good, alignment with what they expect will maximize utility. I find the decision to eat fast food to be an intuitive example of this. But I don't see how this changes anything about Preferences or Goals.

Anyway, if there was an altruism mutation somewhere along the way, and altruism could shape our preferences like happiness, it would be a bit easier to understand the seeming discrepancy between preferences and terminal goals, between likes and wants. Here I will throw out a fancy new rationalist term I learned, and you can tell me if I misunderstand it or am wrong to think it might apply here... Occam's razor?

I'm sorry, I'm trying to understand what you're saying but I think I'm failing. I think the problem is that I'm defining words differently than you. I'm trying to figure out how you're defining them, but I'm not sure. Anyway, I think that if we clarify our definitions, we'd be able to make some good progress.

altruism could shape our preferences like happiness

The way I'm thinking about it... think back to my operational definition of preferences in the first comment where I talk about how an action leads to a mind-state. What action leads to what mind-state depends on the person. An altruistic action for you might lead to a happy mind-state, and that same action might lead me to a neutral mind-state. So in that sense altruism definitely shapes our preferences.

I'm not sure if you're implying this, but I don't see how this changes the fact that you could choose to strive for any goal you want. That you could only say that a means is good at leading to an end. That you can't say that an end is good.

Ie. I could choose the goal of killing people, and you can't say that it's a bad goal. You could only say that it's bad at leading to a happy society. Or that it's bad at making me happy.

Here I will throw out a fancy new rationalist term I learned, and you can tell me if I misunderstand it or am wrong to think it might apply here... Occam's razor?

That's a term that I don't think I have a proper understanding of. There was a point when I realized that it just means that A & B can never be more likely than A alone, and is strictly less likely unless B is certain given A. Like let's say that the probability of A is .75. Even if the probability of B is .999999, P(A & B) < P(A). And so in that sense, simpler = better.
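
(To spell out the arithmetic, here's a quick made-up sketch, assuming A and B are independent; the numbers are just the ones above:)

    # Quick numeric check of the conjunction point, assuming A and B are independent.
    p_a = 0.75
    p_b = 0.999999
    p_a_and_b = p_a * p_b   # under independence, P(A & B) = P(A) * P(B)
    print(p_a_and_b)        # about 0.74999925 -- strictly less than P(A) = 0.75
    # In general P(A & B) = P(A) * P(B | A) <= P(A), so adding a detail B can
    # only lower the probability (or leave it unchanged if B is certain given A).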

But people use it in ways that I don't really understand. Ie. sometimes I don't get what they mean by simpler. I don't see that the term applies here though.

Anyway, in case this idea is all silly and confused, and altruism is a socially conditioned emotion

I think it'd be helpful if you defined specifically what you mean by altruism. I mean, you don't have to be all formal or anything, but more specific would be useful.

As far as socially conditioned emotions go, we are socially conditioned to feel happy in response to altruistic things and sad in response to anti-altruistic things. I wouldn't say that that makes altruism itself a socially conditioned emotion.

Do our internal mental states drive these decisions? Put in the same position 100 times, with the same internal mental state, would someone make the same decision every time, or would it be 50-50?

Wow, that's a great way to put it! You definitely have the head of a scientist :)

We don't know, but either way, we still feel like we make decisions (well, except when it comes to belief, in my experience anyway) so it doesn't really matter too much.

Yeah, I pretty much feel that way too.

Comment author: [deleted] 18 April 2015 03:48:23PM *  0 points [-]

Yeah, this has gotten a little too tangled up in definitions. Let's try again, but from the same starting point.

Happiness = preferred mind-state (similar, potentially interchangeable terms: satisfaction, pleasure)

Goodness = what leads to a happier outcome for others (similar, potentially interchangeable terms: morality, altruism)

I guess my whole idea is that goodness is kind of special. Most people seem born with it, to one extent or another. I think happiness and goodness are the two ultimate motivators. I even think they're the only two ultimate motivators. Or at least I can't think of any other supposed motivation that couldn't be traced back to one or both of these.

Pursuing a virtue like loyalty will usually lead to happiness and goodness. But is it really the ultimate motivator, or was there another reason behind this choice, i.e. it makes the virtue ethicist happy and she believes it benefits society? I'm guessing that in certain situations, the author might even abandon the loyalty virtue if it conflicted with the underlying motivations of happiness and goodness. Thoughts?

Edit: I guess I'm realizing the way you defined preference doesn't work for me either, and I should have said so in my other comment. I would say prefer simply means "tend to choose." You can prefer something that doesn't lead to the happiest mind-state, like a sacrificial death, or here's an imaginary example:

You have to choose: Either you catch a minor cold, or a mother and child you will never meet will get into a car accident. The mother will have serious injuries, and her child will die. Your memory of having chosen will be erased immediately after you choose regardless of your choice, so neither guilt nor happiness will result. You'll either suddenly catch a cold, or not.

Not only is choosing to catch a cold an inefficient happiness-maximizer like donating to effective charities, this time it will actually have a negative effect on your happiness mind-state. Can you still prefer that you catch a cold? According to what seems to me like common real-world usage of "prefer" you can. You are not acting in some arbitrary, irrational, inexplicable way in doing so. You can acknowledge you're motivated by goodness here, rather than happiness.

Comment author: adamzerner 18 April 2015 05:45:37PM *  0 points [-]

I guess my whole idea is that goodness is kind of special. Most people seem born with it, to one extent or another. I think happiness and goodness are the two ultimate motivators. I even think they're the only two ultimate motivators. Or at least I can't think of any other supposed motivation that couldn't be traced back to one or both of these.

In a way, I think this is true. Actually, I should give more credit to this idea - yeah, it's true in an important way.

My quibble is that motivation is usually not rational. If it was, then I think you'd be right. But the way our brains produce motivation isn't rational. Sometimes we are motivated to do something... "just because". Ie. even if our brain knows that it won't lead to happiness or goodness, it could still produce motivation.

And so in a very real sense, motivation itself is often something that can't really be traced back. But I try really hard to respond to what people's core points are, and what they probably meant. I'm not precisely sure what your core point is, but I sense that I agree with it. That's the strongest statement I could make.

Unfortunately, I think my scientific background is actually harming me right now. We're talking about a lot of things that have very precise scientific meanings, and in some cases I think you're deviating from them a bit. Which really isn't too big a deal because I should be able to infer what you mean and progress the conversation, but I think I'm doing a pretty mediocre job of that. When I reflect, I find it difficult to deviate from the definitions I'm familiar with, which is sort of bad "conversational manners", because the only point of words in a conversation is to communicate ideas, and it'd probably be more efficient if I were better able to use other definitions.

Pursuing a virtue like loyalty will usually lead to happiness and goodness. But is it really the ultimate motivator, or was there another reason behind this choice

Haha, you seem to be confused about virtue ethics in a good way :)

A true virtue ethicist would completely and fully believe that their virtue is inherently desirable, independent of anything and everything else. So a true virtue ethicist who values the virtue of loyalty wouldn't care whether the loyalty leads to happiness or goodness.

Now, I think that consequentialism is a more sensible position, and I think you do too. And in the real world, virtue ethicists often have virtues that include happiness and goodness. And if they run into a conflict between, say, the virtue of goodness and that of loyalty, well, I don't know how they'd resolve it, but I think they'd give some weight to each. So in practice I don't think virtue ethicists end up acting too crazy, because they're stabilized by their virtues of goodness and happiness. On the other hand, a virtue ethicist without the virtue of goodness... that could get scary.

I guess I'm realizing the way you defined preference doesn't work for me either

I hadn't thought about it before, but now that I do I think you're right. I'm not using the word "prefer" to mean what it really means. In my thought experiment I started off using it properly in saying that one mind-state is preferable to another.

But the error I made is in defining ACTIONS to be preferable because the resulting MIND-STATES are preferable. THAT is completely inconsistent with the way it's commonly used. In the way it's commonly used, an action is preferable... if you prefer it.

I'm feeling embarrassed that I didn't realize this immediately, but am glad to have realized it now because it allows me to make progress. Progress feels so good! So...

THANK YOU FOR POINTING THIS OUT!

According to what seems to me like common real-world usage of "prefer" you can.

Absolutely. But I think that I was wrong in an even more general sense than that.

So I think you understood what I was getting at with the thought experiment though - do you have any ideas about what words I should substitute in that would make more sense?

(I think that the fact that this is the slightest bit difficult is a huge failure of the English language. Language is meant to allow us to communicate. These are important concepts, and our language isn't giving us a very good way to communicate them. I actually think this is a really big problem. The linguistic-relativity hypothesis basically says that our language restricts our ability to think about the world, and I think (and it's pretty widely believed) that it's true to some extent (the extent itself is what's debated).)

Comment author: [deleted] 18 April 2015 06:40:59PM *  0 points [-]

In a way, I think this is true. Actually, I should give more credit to this idea - yeah, it's true in an important way.

Yay, agreement :)

My quibble is that motivation is usually not rational. If it was, then I think you'd be right. But the way our brains produce motivation isn't rational. Sometimes we are motivated to do something... "just because". Ie. even if our brain knows that it won't lead to happiness or goodness, it could still produce motivation.

Great point. I actually had a similar thought and added the qualifier "psychological" in my previous comment. Maybe "rational" would be better. Maybe there are still physical motivators (addiction, inertia, etc?) but this describes the mental motivators? Does this align any better with your scientific understanding of terminology? And don't feel bad about it, I'm sure the benefits of studying science outweigh the cost of the occasional decrease in conversation efficiency :)

A true virtue ethicist would completely and fully believe that their virtue is inherently desirable, independent of anything and everything else. So a true virtue ethicist who values the virtue of loyalty wouldn't care whether the loyalty leads to happiness or goodness.

Then I think very, very few virtue ethicists actually exist, and virtue ethics is so abnormal it could almost qualify as a psychological disorder. Take the common ethics dilemma of exposing hidden Jews: if someone's virtue were "honesty," they would have to. (In the philosophy class I took, we resolved this dilemma by redefining "truth" and capitalizing; e.g. Timmy's father is a drunk. Someone asks Timmy if his father is a drunk. Timmy says no. Timmy told the Truth.) We whizzed through that boring old "correspondence theory" in ten seconds flat. I will accept any further sympathy you wish to express. Anyway, I think that any virtue besides happiness and goodness will have some loophole where 99% of people will abandon it if they run into a conflict between their chosen virtue and the deeper psychological motivations of happiness and goodness.

Edit: A person with extremely low concern for goodness is a sociopath. The amount of concern someone has for goodness as a virtue vs. amount of concern for personal happiness determines how altruistic she is, and I will tentatively call this a psychological motivation ratio, kind of like a preference ratio. And some canceling occurs in this ratio because of overlap.
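
(For what it's worth, here's a toy numerical sketch of how I picture that ratio; the weights are completely made up:)

    # A made-up illustration of the "psychological motivation ratio" idea above.
    concern_for_goodness = 3.0    # weight goodness gets as a motivator (invented number)
    concern_for_happiness = 1.0   # weight personal happiness gets (invented number)
    motivation_ratio = concern_for_goodness / concern_for_happiness
    print(motivation_ratio)  # 3.0 -- higher means more altruistic; near 0 would be the sociopath end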

But the error I made is in defining ACTIONS to be preferable because the resulting MIND-STATES are preferable. THAT is completely inconsistent with the way it's commonly used. In the way it's commonly used, an action is preferable... if you prefer it.

Yes! I wish I could have articulated it that clearly for you myself.

Instead of saying we "prefer" an optimal mind-state... you could say we "like" it the most, but that might conflict with your scientific definitions for likes and wants. But here's an idea, feel free to critique it...

"Likes" are things that actually produce the happiest, optimal mind-states within us

"Wants" are things we prefer, things we tend to choose when influenced by psychological motivators (what we think will make us happy, what we think will make the world happy)

Some things, like smoking, we neither like (or maybe some people do, idk) nor want, but we still do because the physical motivators overpower the psychological motivators (i.e. we have low willpower)

I think that the fact that this is the slightest bit difficult is a huge failure of the english language.

Absolutely!! I'll check out that link.

Comment author: adamzerner 18 April 2015 08:00:53PM *  0 points [-]

Great point. I actually had a similar thought and added the qualifier "psychological" in my previous comment. Maybe "rational" would be better. Maybe there are still physical motivators (addiction, inertia, etc?) but this describes the mental motivators? Does this align any better with your scientific understanding of terminology?

Hmmm, so the question I'm thinking about is, "what does it mean to say that a motivation is traced back to something". It seems to me that the answer to that involves terminal and instrumental values. Like if a person is motivated to do something, but is only motivated to do it to the extent that it leads to the person's terminal value, then it seems that you could say that this motivation can be traced back to that terminal value.

And so now I'm trying to evaluate the claim that "motivations can always be traced back to happiness and goodness". This seems to be conditional on happiness and goodness being terminal goals for that person. But people could, and often do, choose whatever terminal goals they want. For example, people have terminal goals like "self improvement" and "truth" and "be a man" and "success". And so, I think that a person with a terminal goal other than happiness and goodness will have motivations that can't be traced back to happiness or goodness.

But I think that it's often the case that motivations can be traced back to happiness and goodness. Hopefully that means something.

(In the philosophy class I took, we resolved this dilemma by redefining "truth" and capitalizing; e.g. Timmy's father is a drunk. Someone asks Timmy if his father is a drunk. Timmy says no. Timmy told the Truth.) We whizzed through that boring old "correspondence theory" in ten seconds flat.

Wait... so the Timmy example was used to argue against correspondence theory? Ouch.

Anyway, I think that any virtue besides happiness and goodness will have some loophole where 99% of people will abandon it if they run into a conflict between their chosen virtue and the deeper psychological motivations of happiness and goodness.

Perhaps. Truth might be an exception for some people. Ex. some people may choose to pursue the truth even if it's guaranteed to lead to decreases in happiness and goodness. And success might also be an exception for some people. They also may choose to pursue success even if it's guaranteed to lead to decreases in happiness and goodness. But this becomes a question of some sort of social science rather than of philosophy.

The amount of concern someone has for goodness as a virtue vs. amount of concern for personal happiness determines how altruistic she is, and I will tentatively call this a psychological motivation ratio, kind of like a preference ratio.

I like the concept! I propose that you call it an altruism ratio as opposed to a psychological motivation ratio because I think the former is less likely to confuse people.

Instead of saying we "prefer" an optimal mind-state... you could say we "like" it the most, but that might conflict with your scientific definitions for likes and wants.

Eh, I think that this would conflict with the way people use the word "like" in a similar way to the problems I ran into with "preference". For example, it makes sense to say that you like mind-state A more than mind-state B. But I'm not sure that it makes sense to say that you necessarily like action A more than action B, given the way people use the term "like". Damn language! :)