
els comments on On not getting a job as an option - Less Wrong Discussion

Post author: diegocaleiro 11 March 2014 02:44AM




Comment author: adamzerner 09 April 2015 02:08:46AM *  1 point [-]

When it comes to the option of pursuing a life goal, everything gets really fuzzy.

Very understandable. It makes sense that things that are more clear have a bigger influence on your motivation than things that are less clear.

I think it's that fuzziness that's keeping me from seriously considering giving up my fun-filled life to do something more ambitious.

I think it's a really good sign that you a) know this and b) acknowledge it. Given that it's such an important topic, it seems worth putting proportional thought into it though. And it seems like you are trying to do that. Check out Ugh fields if you haven't already. It's been one of the most practical articles I've found here.

and think my life is cool

Count me among them! In some not so far away alternate universe, I'm doing the same things you are. Which is why your situation is interesting to me.

I think it's a pretty hard question that most people don't seem to actually take seriously. For the record, my impression is that most people here aren't really too ambitious. Two big reasons seem to be a) "it's too difficult/unlikely that I succeed" and b) akrasia. Perhaps you'd like to investigate this further and more formally. If you do, please let me know what you find. If you don't, I probably will, but it'd be at the end of my current to-do list.

But anyway, you seem to be trying to take the question pretty seriously, and seem to be a pretty self-aware and reasonable person. I shall try to say something useful.

  • Question: What are your terminal goals, the ends that you seek? Obviously an incredibly difficult question. It may be possible to proceed without a perfect answer to it, though, if you have a rough idea of what your preference ratios are.

  • Question: How strong an impulse do you feel to do something ambitious? How manageable is this impulse? How do you expect this impulse to change over time? Personally, I have an incredibly strong impulse to do some ambitious things, and I've taken into account that I expect this impulse to remain strong and to make my life unpleasant if I ignored it.

  • Question: How happy would you be if you weren't to pursue an ambitious life? Seems like you have done a pretty good job so far. It seems that you'll continue to be pretty happy, although you seem to be in your early 20s and I'm not sure how much you could extrapolate from your current experiences.

  • Question: How big a positive impact would you have on the world if you pursued a non-ambitious path?

  • Question: What is the probability that you succeed in your ambitious endeavors? My thoughts about this are unconventional. I think that a truly smart and dedicated person would have very very good chances of success. I see a lot of big problems as puzzles that can be solved.

Very very rough calculations on startup success:

Say that I get 10 tries to start a startup in the next 20 years (I know that some take longer than 2 years to fail, but 2 years is the average, and it often takes less than 2 years to fail). At a 50% chance of success, that's a >99.9% chance that at least one of them succeeds (1-.5^10). I know 50% might seem high, but I think that my rationality skills, domain knowledge (eventually) and experience (eventually) give me an edge. Even at a 10% chance of success, I have about a 65% (1-.9^10) chance at succeeding in one of those 10 tries, and I think that 10% chance of success is very conservative. (from here)
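
A minimal sketch of that arithmetic (the 50% and 10% per-attempt rates, and the independence of attempts, are the illustrative assumptions from above, not measured base rates):

    # Chance that at least one of n independent attempts succeeds,
    # given a per-attempt success probability p: 1 - (1 - p)**n
    def at_least_one_success(p, n=10):
        return 1 - (1 - p) ** n

    print(at_least_one_success(0.5))  # ~0.9990, the ">99.9%" figure above
    print(at_least_one_success(0.1))  # ~0.6513, the "about 65%" figure above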

Some thoughts on where I see opportunity if I had the resources.

  • Question: How altruistic are you really? How much do you really care about the billions of people who you never have and probably never will meet? What about the bajillions of people who haven't been born yet? To what extent are you willing to make sacrifices for these people? (I know this is implicitly addressed in some other bullet points, but I thought it'd be worth mentioning explicitly) EDIT: See here for thoughts on EV and ambitiousness.

  • Question: What are the selfish reasons to be ambitious? How happy would you be if you succeeded in your ambitions? (note Ambition can Be Poison and it's easy to never be satisfied, so I don't think it'd make you as happy as you'd think) Could you possibly contribute/build a better world for yourself?

    • Some thoughts of mine. I don't think I'm nearly as well read as most people here, am lacking information and thus am of limited confidence, but I plan on reading up in due time. Anyway, it seems to me that we live in a truly special time. Kurzweil's LOAR makes sense to me (the gist of it anyway). Compared to previous generations, we have an unprecedented opportunity to do big things. AI, cryonics, anti-aging, the internet, joint-consumption economies, etc. I really do think there's a reasonable chance that you could contribute/build a better world for yourself.

    • I know that ambition can be poison. I know there's a reasonable chance I do some "slipping down the slope", but long term I think I'll be able to get over this (to a reasonable extent). Nevertheless, I take it into account in my calculations. However, I think that it'd feel really really really good to have had some big positive impact on the world. It seems like something that really would boost that happiness setpoint up a bit.

    • Finally, the money/fame/power that result from successfully achieving any ambitions definitely have their value, although I think it's nothing compared to the personal satisfaction.

  • Confessions: I think these thoughts are about 40% clearer than the previous best analysis I had done. I personally don't think I've thought nearly hard enough about these things given their importance, although part of the reason is intentional - I have a tendency to overthink things, which is stressful, and I've sort of reached a point of diminishing returns... idk. And I unfortunately do fear that this analysis suffers a bit from confirmation bias (ie. biased in favor of ambition). So please take all of this into account.

So personally, with a title like "Morality Doesn't Exist" would you be willing to describe your views as moral relativism?

I'm not particularly well read in philosophy. Probably way more so than the average person, but below average for someone here. I don't know what moral relativism is, but I'll look it up...

It seems to be saying that goals are arbitrary, and if so, then yes - I do think my views could be described that way. Thanks for introducing me to the official viewpoint. You seem to know a bit about it - do you know anything about it that seems inconsistent with what I appear to believe?

After brief reading, it seems that I may not agree with the less strict interpretations of moral relativism. It seems that people use it as an excuse to say "don't judge others for what they believe". It seems to me that a lot of viewpoints really do have a roughly utilitarian terminal goal, but they invent rules of thumb that promote this end without admitting that the end is actually what they're after.

The title of the website, Less Wrong, almost implies an objective morality

I wouldn't say so. The way I think of it is "less bad at achieving your ends". If you read HPMOR I would say "Quirrel is (at times) quite rational, even if his goals are sometimes selfish".

it seems like many LWers shy away from the term relativism. Although I don't like it either, I still don't totally understand why they do, assuming they're rational enough to have a reason other than discomfort with the idea.

My impression is that the community does have some "soft spots", and not wanting to believe in moral relativism sort of seems like it's one of them (based on what I remember when I read through the metaethics stuff). Not wanting to seem naive appears to be another "soft spot" of the community to me.

And I think that "anti-religion" is a bias here too. I had gotten slammed for asking about the possibility of an afterlife here. Regardless of whether I was right or wrong, I don't think I was uncivil or anything and I think it's a topic worth discussion, at the very least (from their perspective) to help me better understand it. But I sense that it hit a soft spot, hence the downvoting and mild incivility. And I've seen similar things happen elsewhere here also. I figure you should know this given your background. For the record, I'd probably call myself a confused agnostic. I definitely don't believe in the teachings of religion or god in the traditional sense, but I don't pretend to understand the true workings of consciousness or the universe and I remain open to possibilities that atheist wouldn't. And on some level, I think Louis CK makes a good point (plus it's funny).

Anyway, my point here is that humans are quite flawed. I love LW but people here are far from perfect. And so am I. And even EY is far from perfect (although I think he's astonishingly smart).

Depending on how far up the slippery slope of ambition you want to climb, I just heard my old boss in Guatemala is looking for a new SAT teacher for the next year, starting this summer, if you happen to be interested. But beware, the slope is slippery in both directions!

No thanks (see above). But I appreciate the thought :)

Comment author: [deleted] 09 April 2015 08:40:48PM 2 points [-]

Actively look out for the flinch, preferably when you are in a motivationally "high" state. Better still, do this when you are both motivationally high, not under time pressure, and when you are undertaking an overview of your life.

Thanks for the link. You're right about this being an "ugh field" for me, something I usually flinch from even thinking about. I think my doubts about Christianity used to be an "ugh field" too, but I feel a lot better for having confronted them.

Two big reasons seem to be a) "it's too difficult/unlikely that I succeed" and b) akrasia.

Those seem to apply to me too. I'd never heard of akrasia before, what a great word. If I investigate this further among the LW community, I'll let you know.

Thanks so much for your thorough reply. I really, really appreciate it! Answers to your questions:

  • You're right. This is an incredibly difficult question. Based on the sample human terminal goals given, I think the biggest for me are health, joy, and curiosity. Can environmentalism be a terminal goal? What about efficiency in general?

  • My impulse to do ambitious things is about a 2 out of 10, so not very strong at all, and very manageable, currently. It used to be more like a 1 though, so the current trend seems to be that the older I get, the more attractive a life of accomplishment looks.

  • How happy would I be not pursuing ambition? You're right; I'm super happy right now. I have absolutely no idea if this happiness with a leisurely lifestyle is something I can maintain or not. My dad and his best friend are both super smart and not very ambitious, and seem to be quite happy even as they approach their 50's, which makes me think I could stay very happy. Then again, I might be different. Maybe my lack of ambition was just from the way I was raised (in my family, we all bragged about acing tests with no outside study, about never having homework, about never doing assigned readings, about skipping class to hang out in the rec room, etc.. kinda pathetic, in hindsight).

  • How big an impact would I have? If I knew this, things would be lots less fuzzy! One goal that I'd love to pursue would be promoting hitchhiking/slugging. This has to do with my other values of environmentalism and efficiency. I also think it would be wonderful if people were less fearful of strangers. I'm not sure how exactly I'd work toward this goal, so it's really hard to gauge potential impact. If it were successful though, traffic would be decongested, carbon emissions would be decreased, and people would save money on transportation and have more opportunities to interact with new people... so yeah, it could potentially have a significant impact.

  • Probability of success? Good question. No idea, again this is very fuzzy since I don't even know where I would start; it's just not something I've thought about much. I'd probably have to find someone to team up with who has more concrete skills. All I have is a general idea and a pretty logical mind, no relevant experience or education. I am usually pretty confident and anything I think I can do, I can do, but I normally don't set my sights too high.

  • How altruistic am I, really? I don't know. I'm still going through the repercussions of my deconversion. Right now, the amount of caring I have for people in the world is relative to the amount I used to have as a Christian. Now that eternity/an afterlife is out of the picture for me, I'm a little less frantic about saving the world and more content doing my own thing. Still, I think I care enough that if I were to pursue a big goal or career, altruism would be my chief motivation.

  • The selfish reasons to be ambitious are significant too, I guess. Currently, I'm so happy with my leisurely life, it's hard to remember back to times when I had accomplishments, like academic awards and track and field records. Money I don't care about so much, but accomplishments feel really great, whether it's because of the personal satisfaction or the praise, it's hard to tell. I'm fairly confident that ambition would never be poison for me. The accomplishments I've had in life felt great, but they were really just side benefits of me pursuing other terminal goals, and as great as they were, they didn't give rise to any ambition.

    This was a good analysis; thank you! You're right that I really should put proportional thought into this.

Hmm, I'm not particularly well read in philosophy either, but I hear the term "moral relativism" thrown around a lot; mostly as a result of sharing my deconversion story actually, as a lot of people have commented that atheists almost have no choice but to be moral relativists. I think "moral relativism" is pretty simple and just means there is no "right" or "wrong" outside of an individual, and I think it's consistent with your views, but I'm not totally sure.

I wouldn't say so. The way I think of it is "less bad at achieving your ends".

Haha, okay, I hear you. Actually as soon as I typed that sentence, I realized this would be your response. It's just a different definition of "wrong" than I'm used to, but it makes sense.

My impression is that the community does have some "soft spots", and not wanting to believe in moral relativism sort of seems like it's one of them (based on what I remember when I read through the metaethics stuff).

Yeah. I think this is another topic that probably deserves more discussion among the community than it currently gets. If our society gets to be extremely rational (which I think most people here strongly desire), it will be really hard to draw the line between individual freedom and what's best for the future of humanity, and I think this is something worth serious thought.

Yeah, I get the same feeling about an anti-religion bias here. Your post about an afterlife is interesting. We really have no reason to believe in one, but without data, I definitely don't think people should assign near-certain probability to the non-existence of an afterlife, either. I guess I'm more of an agnostic, too. I don't believe in the Christian God, but like I said in my very first post, I can't be sure there isn't a good god or gods struggling against an evil god out there somewhere. It is possible, I just have no reason to believe it's true, so I don't really think about it.

Maybe there's a subconscious tendency to go along with the mainstream views on LW just because almost everyone here is so good at thinking and rational people usually tend to agree with other rational people. Personally, I discovered this site and thought, wow! So many people who think SO similarly to me, only they've been thinking much harder and for much longer... the general ideas around here must represent the most rational and least biased opinions on any topic. It's tempting to just trust that the ideas around here are all things I can agree with and understand, just because I've agreed with almost everything I've read so far.... I guess I just have to be cautious, keep putting in the effort of thinking for myself, and remember that LW is a (wonderful) resource, not a bible.

Comment author: adamzerner 10 April 2015 12:49:09AM 2 points [-]

A big part of the reason why I'm ambitious is because I try really hard to not fall victim to scope insensitivity. And regarding ambition, there are some really, really big magnitudes at play. Ex.

  • Even a small increase in the chance that I don't die and get to live another bajillion years has a huge expected value (EV); see the toy sketch after this list.
  • Same with altruism - even a small chance that I help billions of people has a huge EV.
  • Regarding my happiness, I think I may be lying to myself though. I think I rationalize that the same logic applies, that if I achieve some huge ambition there'd be a proportional increase in happiness. Because my brain likes to think achieving ambition -> goodness and I care about how much goodness gets achieved. But if I'm to be honest, that probably isn't true.
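
A toy sketch of that expected-value reasoning; every number below is made up purely for illustration:

    # Toy expected-value comparison: a tiny probability of a huge payoff
    # can still dominate a sure but modest one.
    p_longshot  = 0.001      # assumed: 0.1% chance the ambitious path pays off
    big_payoff  = 1_000_000  # assumed: value if it does (e.g. "helping billions")
    sure_payoff = 100        # assumed: value of the comfortable default path

    ev_ambitious = p_longshot * big_payoff  # 1000.0
    ev_default   = sure_payoff              # 100
    print(ev_ambitious > ev_default)        # True: the long shot wins on EV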

Another reason why I'm ambitious is more practical - I want to retire early, really ASAP. Starting a startup, making a lot of money and being able to retire would be great.

Comment author: hairyfigment 11 April 2015 06:59:18PM 1 point [-]

Re: afterlives - we have tons of data. Brain damage can cause loss of function in a way which varies depending on what part of the brain is damaged. Everything points towards total brain damage causing total lack of function. We also have evidence that stimulating one part of the brain can turn off consciousness, and some evidence that conscious experience requires many parts of the brain working together.

I posted about the first part of that somewhere, though apparently not in response to the linked post. Probably I did not respond to that one because I'd already made this point, and it gets tiring to see people ignoring it again.

Comment author: [deleted] 15 April 2015 05:29:16PM 2 points [-]

Oh, interesting, thanks for sharing! Data is good; that's cool that we know that, and I think I agree that it makes any afterlife extremely improbable. Sorry, I wasn't ignoring your point, I just noticed this now as a result of having just realized I can click on the little orange envelope and see replies and private messages.

Comment author: adamzerner 10 April 2015 12:35:59AM *  1 point [-]

Thanks so much for your thorough reply. I really, really appreciate it!

Glad to help (if I am actually helping)! I find this fun.

Can environmentalism be a terminal goal? What about efficiency in general?

Of course, anything can be a terminal goal :). But consider how strong a statement it is to say that something is a terminal goal. That it has intrinsic value. As for environmentalism, would the environment matter if there was no one on earth to experience it? If not, it makes me think that environmentalism matters to the extent that it makes people's lives better, and thus would be an instrumental goal.

Some people would respond to what I just said by saying something along the lines of "Of course it wouldn't matter if no one was on earth, but don't be ridiculous - be practical." My response to that is that in discussing things like this, it's important to be very precise with what you say. Because a lot of disagreement comes from arguing over semantics, which comes from bad communication.

I have absolutely no idea if this happiness with a leisurely lifestyle is something I can maintain or not.

The good thing is that a) this is a testable question that you'll get more and more evidence for as time progresses, and b) you can easily adjust the extent to which you pursue ambitions. It's not like you have to decide once and for all now (not to imply you don't know that, just saying).

My guess is that you will be able to maintain your happiness.

a) The happiness set point theory seems rather accurate (my reason for thinking this is mostly based on anecdotal evidence, not on reading much into the research).

b) Anecdotally, it also seems to me that the "need to be ambitious" is also pretty set in stone. Ie. You know if you're one of those people, and you know somewhat early in life. I don't know of many 40 year olds who suddenly develop an irresistible urge to do something ambitious. Note: in HPMOR the distinction between having ambition and being ambitious is made.

Maybe my lack of ambition was just from the way I was raised (in my family, we all bragged about acing tests with no outside study, about never having homework, about never doing assigned readings, about skipping class to hang out in the rec room, etc.. kinda pathetic, in hindsight).

That's amazing! I hated school and did a lot of rebellious things out of spite. Back to that alternate universe again... I wish my family was like that. One of my favorite rebellious things was that I refused to do some AP Micro project at the end of the year because I was already in college, getting a zero would only bring my grade from an A to a B, and economics is all about incentives, so it just felt too right to boycott the project on those grounds.

Probability of success? Good question. No idea, again this is very fuzzy since I don't even know where I would start; it's just not something I've thought about much.

With respect, this seems like an ugh field (a more specific instance of what you said was a broader ugh field). P(success) seems like it plays a big role in whether or not you decide to be ambitious. I'm not sure though - if you thought you had a, say >50% chance of having a big impact on the world, would you then want to be ambitious?

If P(success) does indeed play a big role, I think it'd be a good idea to take an idea and give a real honest effort at seeing if you could "solve the puzzle". Try to break it down into its components. What would have to happen in order for you to succeed? Break those components down further and ask the same question, etc. Honestly, try doing this for 5-10 ideas.

After doing this, you should have a much better sense of what P(success) is. Which has two benefits: 1) increases the chances you make the right decision as to whether or not to be ambitious, 2) will make you feel more confident in your decision, and perhaps more "at peace"/less likely to feel any sort of guilt.

How altruistic am I, really? I don't know. I'm still going through the repercussions of my deconversion. Right now, the amount of caring I have for people in the world is relative to the amount I used to have as a Christian.

Very understandable.

Now that eternity/an afterlife is out of the picture for me

WOAH!!! Slow down there :)

I really don't want to die and am really hoping that I won't have to. And I plan on doing what I can to avoid it. A lot of people here think similarly, and there seems to be reason to hope.

A lot of people are hopeful that we might not have to die. There's the possibility of cryonics working out, anti-aging research, AI (<- a very clear introduction to AI if you don't know much about it). And there's even the possibility that we have no clue how consciousness really works and that there is indeed an afterlife. Note that I used the word possible. I don't know how probable these things are. This talks a bit about it.

Maybe there's a subconscious tendency to go along with the mainstream views on LW just because almost everyone here is so good at thinking and rational people usually tend to agree with other rational people.

I think there is. But note the distinctions between types of conformity. Part of it is sensible. The fact that other smart people believe something to be true is evidence that it's true (in that it increases the likelihood that it's true). And so it makes sense to adjust your beliefs accordingly. The real question is "how much should you adjust your beliefs".

As for the bad types of conformity, I think it exists here too. My judgement is that it's moderately less than average.

It's tempting to just trust that the ideas around here are all things I can agree with and understand, just because I've agreed with almost everything I've read so far

I can definitely empathize with that. Discovering LW was one of the best things that's ever happened to me. I had that same sense of agreeing with almost everything I was reading, and it was really really nice to hear the thoughts articulated so well.

I guess I just have to be cautious, keep putting in the effort of thinking for myself, and remember that LW is a (wonderful) resource, not a bible.

Always :)

Comment author: [deleted] 13 April 2015 07:53:21PM 0 points [-]

You know it's funny, I've never thought about this before, but I actually would like for the earth to stay beautiful, even if there are no humans around to enjoy it. Feeling such a strong attachment to the earth makes me think that I empathize a bit too much with Kaczynski... which got me thinking about psychopathic tendencies, and after looking them up, I realized I borderline have many of them. I'm not really too worried about myself, but this got me back to morality again. Psychopaths are probably quite rational about pursuing their own personal terminal goals. You can't say they're doing anything wrong, can you? Is there anything you can really say to convince a rational psychopath who is smart enough to get away unpunished for his actions to act in a way that is better for society?

Anyway, yeah, I think you're right that I could maintain my happiness. I'll probably continue in this "phase" of life another 2-3 years at least, enjoy my free time, do a lot of reading, and start thinking harder about what to do with the rest of my life. My back-up plan is to become a cop/detective in the Bay Area, which would be somewhat physically and mentally engaging, offer great hours and benefits, allow me to retire on pension after 25 years, and pay enough that I would probably end up donating more than 10%... a fun, comfortable, guilt-free life, but definitely not something that would change the world or leave me feeling immensely fulfilled. So, I'll take your suggestion and try to work out the pieces to the puzzle for a few more ambitious ideas.

A distinction between being ambitious and having ambition? Wow, I think I'm going to love that book.

One of my favorite rebellious things was that I refused to do some AP Micro project at the end of the year because I was already in college, getting a zero would only bring my grade from an A to a B, and economics is all about incentives

Yes!! Haha I did the same thing, when it came to final exams, some people would calculate the score they needed to bump their grades up one notch, but for me, every year, every class, even in college, my question was "What's the lowest score I can get and still get an A in the class?" If I knew I would get a C without studying, or I could do a quick 15 minute review and get an A, I wouldn't even do it. The effort I put into classes also correlated with how harsh a grader the teacher was. I actually had one teacher who believed in grading students according to their effort rather than according to their ability/final product compared to the rest of the class. I hated it, but in hindsight, I would have gotten a lot more out of school if all teachers had done that.

You're right about the probability of success being an ugh field. I like your idea about solving puzzles and high probabilities of success. I'll try breaking some ideas down, someday. Obviously higher P(success) correlates with a stronger desire to do something ambitious, but even if it were >50%, I would still be selfish enough to consider doing my own thing.

2) will make you feel more confident in your decision, and perhaps more "at peace"/less likely to feel any sort of guilt.

Either that or...well, you know. Maybe this is why it's an ugh field. I'm too happy living my leisurely life and subconsciously fear that if I find a high probability of success, I won't change anything, but will feel a more substantial amount of guilt.

Anyway, I found Scott's post about comparative advantage interesting and relevant. It was the first SSC post I read, which made me read tons of the archived posts, which eventually led me here. I know my strengths and weaknesses, but I really wish I knew my comparative advantage. Even if I did know, though, what if it wasn't nearly as fun as nannying? The post ends:

If everyone is legitimately a different person with a different brain and different talents and abilities, then all God gets to ask me is whether or not I was Scott Alexander.

God is definitely convenient here.

I really don't want to die and am really hoping that I won't have to. And I plan on doing what I can to avoid it. A lot of people here think similarly, and there seems to be reason to hope.

Oh, yeah! Not dying would be cool! I haven't read much about it yet, but I'm encouraged by the fact that people here are so hopeful. Right now, when I think about people dying and turning to dust, I think it sounds great, but really that's only relative to thinking about people dying and going to hell. Edit: I read the AI article you linked, and the part 2 afterwards, and it made the whole AI idea seem a lot more concrete/probable/exciting. Thanks! It makes me wonder, though, is it selfish to work on AI? The people working on it will die if they don't succeed, so personally they have nothing to lose even if they accidentally cause an early extinction of the human race. Then again, if we assume our eventual extinction is inevitable without AI, the overall risk-reward ratio favors research. Maybe I should consider giving some/all of my donations to MIRI. Thanks for bringing this all to my attention. Figuring out what I consider to be the probability of immortality should probably be more urgent than figuring out the probability of success in pursuing ambition, and like you mention, there will probably be some relation between the two.

The fact that other smart people believe something to be true is evidence that it's true (in that it increases the likelihood that it's true)

Yeah, this is intuitive. But I think we should be careful here not to look at an idea and ask, "What % of people who believe this are smart?" because the real question is, "Out of all the smart people who have seriously considered this idea, what % believe it?" It may be that some ideas are considered mainly by smart people, which would explain why a high percentage of people believing them are smart.
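
A quick illustration with hypothetical counts (none of these numbers are real; they only show how the two percentages can come apart):

    # Suppose an idea is mostly encountered by smart people in the first place.
    smart_considered = 100  # hypothetical: smart people who seriously considered it
    smart_believe    = 10   # hypothetical: of those, how many ended up believing it
    other_believe    = 0    # hypothetical: non-smart believers (few have heard of it)

    # "What % of believers are smart?" looks impressive...
    print(smart_believe / (smart_believe + other_believe))  # 1.0 (100%)
    # ...but the informative number is belief among those who seriously considered it:
    print(smart_believe / smart_considered)                 # 0.1 (10%)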

Comment author: adamzerner 13 April 2015 09:48:09PM *  1 point [-]

Psychopaths are probably quite rational about pursuing their own personal terminal goals.

I doubt it. In my experience, the average person is quite stupid. My thought is that the fact that they're a psychopath means that they have different goals, but not necessarily that they're more instrumentally rational (better at achieving their goals, whatever they are).

You can't say they're doing anything wrong, can you?

No, but I can say that I don't like them :)

Is there anything you can really say to convince a rational psychopath who is smart enough to get away unpunished for his actions to act in a way that is better for society?

Interesting question. If they genuinely prefer to cause harm to people, and if they really are instrumentally rational enough to only do things that help them achieve their goals, then no. But altruistic acts are one of the biggest correlates of happiness in normal people, so perhaps their psychopathy isn't set in stone and they could be convinced that there's a way to achieve more happiness.

allow me to retire on pension after 25 years

You may need a lot less money to retire than you'd think, depending on how much you spend. The author argues (throughout the site) that a lot of spending is on essentially status-related goods, and that spending money on free time (indirectly) and security is more likely to lead to happiness (if you're the right type of person, but I sense that you are).

Anyway, I found Scott's post about comparative advantage interesting and relevant.

My thoughts on this are a bit unconventional. Most people use the term intelligence to refer to things like aptitude, working memory size and ability to remember things. I think that those things are overrated and that the ability to break things down like a reductionist is underrated. I started to write about it here, but am having trouble. I welcome any feedback (if you have any thoughts, please use Medium's side comments, it's really useful).

People used to ask me for writing advice. And I, in all earnestness, would say “Just transcribe your thoughts onto paper exactly like they sound in your head.” (from article)

Yes! Well, I think it's an oversimplification, but I very much agree with the direction of the advice. I hate formality. In school they give you all of these rules about how to write, and these rules seem to take you further and further away from how you actually speak. I always thought that these rules were bad, and I rebelled and got only average grades in writing even though I think I'm an amazing writer :)

Specifically, it’s whether I can say “No, I’m really not cut out to be Elon Musk” and go do something else I’m better at without worrying that I’m killing everyone in Canada.

That seems to be the central point the article is about, and also sort of what we're talking about. I actually don't even think there's that much to say. When I dissolve the topic, all I see is:

  1. Innate ability is a determinant of the EV of you pursuing an ambition. (How much so is a different topic)
  2. Different paths have different EV's of how much good they'll do. The paths you choose reflect your preference ratios.
  3. When I dissolve things like morality, all I see are preference ratios. And so once you know what your path choice says about your preference ratios, I don't feel like there's a leftover question of "but is it moral?".

I could add a lot of qualifiers to 1, 2 and 3, but I think you get what I'm saying so I won't.

Re: comparative advantage

It seems to me that people don't apply EV when calculating comparative advantages. Ie. they think about how much output they could generate right now rather than how much output they could be expected to generate over a period of time.

I'm a big believer that the ceiling of peoples' abilities is much higher than they think, and so taking this into account, my calculations of EV tend to be higher. Like, to people who say that they can't contribute to existential risk reduction, I'd say "How much do you think you could contribute if you studied really hard for 20 years?". And in calculating EV's for things like existential risk reduction, I think people fall victim to scope insensitivity. Even if you don't have a great chance at contributing, the magnitude of impact that a contribution would have is soooo great that it probably still leads to a high overall EV. Depending on preference ratios of course.

but really that's only relative to thinking about people dying and going to hell

I'm sorry you thought that. I can't imagine how horrifying that must be.

Edit: I read the AI article you linked, and the part 2 afterwards, and it made the whole AI idea seem a lot more concrete/probable/exciting

:) It was somewhat life changing for me. I actually understood it. Before reading that I just read a few things on LW, and didn't really understand it.

It makes me wonder, though, is it selfish to work on AI?

Yes! I think that selfishness is a huge component of the benefit of working on AI. After all you are one of the people who would benefit, and you have a lot to gain/lose. People don't seem to acknowledge this. But you would also be helping billions of currently living people, and bajillions of yet-to-be-born people, so for those reasons it's an incomprehensibly altruistic thing to do.

The people working on it will die if they don't succeed, so personally they have nothing to lose even if they accidentally cause an early extinction of the human race. Then again, if we assume our eventual extinction is inevitable without AI, the overall risk-reward ratio favors research.

I'm confused. If you assume that dying is bad, you have a lot to lose (proportional to the badness of dying). Are you considering death to be a neutral event?

Maybe I should consider giving some/all of my donations to MIRI.

To me that seems like a great option. Others seem to think so as well. Personally I don't know nearly enough about AI or the other options to be able to say with even moderate confidence.

Comment author: [deleted] 15 April 2015 05:14:46AM *  0 points [-]

I doubt it. In my experience, the average person is quite stupid.

Okay, yeah, I should have added the word some. Kaczynski is the only psychopath I've really read much about, so maybe I really did extrapolate his seeming rationality onto other psychopaths, even though we probably never hear about 99% of them. That would have to be some kind of bias; out of curiosity how would you label it? Maybe survivorship bias? Or availability heuristic? Anchoring? Or maybe even all of the above?

You may need a lot less money to retire than you'd think.

Believe me, I know. Even without trying to save money, I actually end up spending less on myself (excluding having paid for college) than on charity. Free hobbies are great. I didn't mean a pension was a reason to become a detective; it would just be a nice perk. Thanks for the link, though. Lots of good articles on that site!

Most people use the term intelligence to refer to things like aptitude, working memory size and ability to remember things. I think that those things are overrated and that the ability to break things down like a reductionist is underrated.

Well, I'm biased in favor of this idea, since I have an awful memory, but a pretty good ability (sometimes too good for my own good) to break things down like a reductionist and dissolve topics. I'll check out your post tomorrow and try to give some feedback.

even though I think I'm an amazing writer :)

I think so too!

I actually don't even think there's that much to say.

Nope, there's really not, but another thing I've realized from reading SSC is that a major component of great writing (and teaching) is the sharing of relevant, interesting, relatable examples to help an idea. If you skillfully parse through an idea, the audience will probably understand it at the time. But if you want the idea to actually sink in and stick with them, great examples are key. This is one reason I like Scott's posts so much; they actually affect my life. Personally, I was borderline cocky when I was younger (but followed social norms and concealed it). Then, I got older and started to read more and more, moved to the Bay Area, and met loads of smart people. Because of this, my self-esteem began to plummet, but I read that article just in time to stabilize it at a healthy, realistic level.

Anyway, Scott allows people to go easy on themselves for contributing less to the world than they might like, relative to their innate ability. Can we also go easy on ourselves relative to innate conscientiousness?

people fall victim to scope insensitivity

Yeah, this is sooo real. On a logical level, it's easy to recognize my scope insensitivity. On a "feeling" level, I still don't feel like I have to go out and do something about it. But I don't want to admit my preference ratios are that far out of whack; I don't want to be that selfish. Ugh. Now I feel like I should do something ambitious again, I'm so waffley about this. Thanks for all the help thinking through everything. This is BY FAR the best guidance anyone has ever given me in my life.

I'm confused. If you assume that dying is bad, you have a lot to lose (proportional to the badness of dying). Are you considering death to be a neutral event?

No... sorry, I was just working through my first thoughts about the idea, not making a meaningful point. Continuing on the selfishness idea, all I meant was that the researchers themselves would surely die eventually without AI, so even if AI made the world end a few years earlier for them, they personally have nothing to lose relative to what they could gain (dying a few years earlier vs. living forever). My first thought was "that's selfish, in a bad way, since they care less than the bajillions of still unborn people would about whether humans go extinct" but then I extrapolated the idea that the researcher would die without AI to the idea that humanity would eventually go extinct without AI and decided it was selfish in a good way.

Anyway, another question for you. You know how you said we care only about our own happiness? Have you read the part of the sequences/rationality book where Eliezer brings up someone being willing to die for someone else? If so, what did you make of it? If not, I'll go back and find exactly where it was.

Comment author: adamzerner 15 April 2015 02:40:26PM *  1 point [-]

Kaczynski is the only psychopath I've really read much about, so maybe I really did extrapolate his seeming rationality onto other psychopaths

I don't know too much about him other than the basics ("he argued that his bombings were extreme but necessary to attract attention to the erosion of human freedom necessitated by modern technologies requiring large-scale organization").

I think that his concerns are valid, but I don't see how the bombings help him achieve the goal of bumping humanity off that path. Perhaps he knew he'd get caught and his manifesto would get attention, but a) there's still a better way to achieve his goals, and b) he should have realized that people have a strong bias against serial killers.

The reason I think his concerns are valid is because capitalism tries to optimize for wanting, which is sometimes quite different from liking. And anecdotally, this seems to be a big problem.

That would have to be some kind of bias; out of curiosity how would you label it? Maybe survivorship bias? Or availability heuristic? Anchoring? Or maybe even all of the above?

I'm not sure what the bias is called :/. I know it exists and there's a formal name for it though. I know because I remember someone calling me out on it in LWSH :)

Nope, there's really not, but another thing I've realized from reading SSC is that a major component of great writing (and teaching) is the sharing of relevant, interesting, relatable examples to help an idea.

Yes, I very much agree. At times I think the articles on LW fail to do this. Humans need to have their System 1's massaged in order to understand things intuitively.

Anyway, Scott allows people to go easy on themselves for contributing less to the world than they might like, relative to their innate ability. Can we also go easy on ourselves relative to innate conscientiousness?

Idk. This seems to be a question involving terminal goals. Ie. if you're asking whether our innate conscientiousness makes us "good" or "bad".

When I think of morality this is the/one question I think of: "What are the rules we'd ask people to follow in order to promote the happiest society possible?". I'm sure you could nitpick at that, but it should be sufficient for this conversation. Example: the law against killing is good because if we didn't have it, society would be worse off. Similarly, there are norms of certain preference ratios that lead to society being better off.

I don't think we'd be better off if the norm was to have, say, equal preference ratios for everyone in the world. Doing so is very unnatural and would be very difficult, if not impossible. You have to weigh the costs of going against our impulses against the benefits that marginal conscientiousness would bring.

I'm not sure where the "equilibrium" points are. Honestly, I think I'd be lying to myself if I said that a preference ratio of 1,000,000,000:1 for you over another human would be overall beneficial to society. I suspect that subsequent generations will realize this and look at us in a similar way we look at Nazis (maybe not that bad, but still pretty bad). Morality seems to "evolve" from generation to generation.

Personally, my preference ratios are pretty bad. Not as bad as the average person because I'm less scope insensitive, but still bad. Ex. I eat out once in a while. You might say "oh well that's reasonable". But I could eat brown rice and frozen vegetables for very cheap and be like 70% as satisfied, and pay for x meals for people that are quite literally starving.

But I continue to eat out once in a while, and honestly, I don't feel (that) bad about it. Because I accept that my preference ratios are where they are (pretty much), and I think it makes sense for me to pursue the goal of achieving my preferences. To be less precise and more blunt, "I accept that I'm selfish".

And so to answer your question:

Can we also go easy on ourselves relative to innate conscientiousness?

I think that the answer is yes. Main reason: because it's unreasonable to expect that you change your ratios much.

Yeah, this is sooo real. On a logical level, it's easy to recognize my scope insensitivity. On a "feeling" level, I still don't feel like I have to go out and do something about it.

It's great that you understand it on a logical level. No one has made much progress on the feeling level. As long as you're aware of the bias and make an effort to massage your "feeling level" towards being more accurate, you should be fine.

But I don't want to admit my preference ratios are that far out of whack; I don't want to be that selfish.

Why?

I think that exploring and answering that question will be helpful.

Try thinking about it in two ways:

1) A rational analysis of what you genuinely think makes sense. Note that rational does not mean completely logical.

2) An emotional analysis of what you feel, why you feel it, and, in the event that your feelings aren't accurate, how you can nudge them to be more accurate.

This is BY FAR the best guidance anyone has ever given me in my life.

Wow! Thanks for letting me know. I'm really happy to help. I've been really impressed with your ability to pursue things, even when it's uncomfortable. It's a really important ability and most people don't have it.

I think that not having that ability is often a bottleneck that prevents progress. Ex. an average person with that ability can probably make much more progress than a high IQ person without it (in some ways). It's nice to have a conversation that actually progresses along nicely.

Anyway, another question for you. You know how you said we care only about our own happiness? Have you read the part of the sequences/rationality book where Eliezer brings up someone being willing to die for someone else? If so, what did you make of it? If not, I'll go back and find exactly where it was.

I think I have. I remember it being one of the few instances where it seemed to me that Eliezer was misguided. Although:

1) I remember going through it quickly and not giving it nearly as much thought as I would like. I'm content enough with my current understanding, and busy enough with other stuff that I chose to put it off until later. Although I do notice confusion - I very well may just be procrastinating.

2) I have tremendous respect for Eliezer. And so I definitely take note of his conclusions. The following thoughts are a bit dark and I hesitate to mention them... but:

a) Consider the possibility that he does actually agree with me, but he thinks that what he wrote will have a more positive impact on humanity (by influencing readers)

b) In the case that he really does believe what he writes, consider that it may not be best to convince him otherwise. Ie. he seems to be a very influential person in the field of FAI, and it's very much in humanity's interest for that person to be unselfish.

I haven't thought this through enough to make these points public, so please take note of that. Also, if you wouldn't mind summarizing/linking to where and why he disagrees with me, I'd very much appreciate it.

Edit: Relevant excerpt from HPMOR

They both laughed, then Harry turned serious again. "The Sorting Hat did seem to think I was going to end up as a Dark Lord unless I went to Hufflepuff," Harry said. "But I don't want to be one."

"Mr. Potter..." said Professor Quirrell. "Don't take this the wrong way. I promise you will not be graded on the answer. I only want to know your own, honest reply. Why not?"

Harry had that helpless feeling again. Thou shalt not become a Dark Lord was such an obvious theorem in his moral system that it was hard to describe the actual proof steps. "Um, people would get hurt?"

"Surely you've wanted to hurt people," said Professor Quirrell. "You wanted to hurt those bullies today. Being a Dark Lord means that people you want to hurt get hurt."

Sorry, I feel like I'm linking to too many things which probably feels overwhelming. Don't feel like you have to read anything. Just thought I'd give you the option.

Comment author: [deleted] 15 April 2015 09:06:52PM *  0 points [-]

b) he should have realized that people have a strong bias against serial killers.

Yeah, this was irrational. He should have remembered his terminal value of creating change instead of focusing on his instrumental value of getting as many people as possible to read his manifesto. -gives self a little pat on back for using new terminology-

The reason I think his concerns are valid is because capitalism tries to optimize for wanting

Could you please elaborate on this idea a little? Anyway, thanks for the link (don't apologize for linking so much, I love the links and read through and try to digest about 80% of them...). The liking/wanting difference is intuitive, but actually putting it into words is really helpful. I'm interested in exactly how you tie it in with Kaczynski, and I also think it's relevant to my current dilemma.

Anyway, Scott's example about smoking makes it seem as if people want to smoke but don't like it. I think it's the opposite; they like smoking, but don't want to smoke. Do I really have these two words backwards? We need definitions. I think "liking" has more to do with your preferences, while "wanting" has to do with your goals. I recognize in myself, that if I like something, it's very hard for me not to want it, and personally I find matrix-type philosophy questions to actually be difficult. That's why I've never tried smoking; I was scared I might like it and start to want it. Without having tried it, it's easy to say that it's not what I want for myself. Is this only because I think it would bring me less happiness in the long run? I don't think so. Even if you told me with certainty that smoking (or drugs) feels so incredibly good and is so incredibly fun that it could bring me happiness that outweighs the unhappiness caused by the bad stuff, I still wouldn't want it! And I have no idea why. Which makes me wonder... what if I had never experienced how wonderful a fun-filled mostly-hedonic lifestyle is? Would I truly want it? Or am I just addicted?

You might say "oh well that's reasonable". But I could eat brown rice and frozen vegetables for very cheap and be like 70% as satisfied, and pay for x meals for people that are quite literally starving.

Funny that you mention this example; I wouldn't say it's reasonable. Let me share a little story. When I was way younger, maybe 10 years ago, I went through a brief phase where I tried to convince my friends and family that eating at restaurants was wrong, saying "What if there were children in pain from starvation right outside the restaurant, and you knew the money you would spend in the restaurant could buy them rice and beans for two weeks... you would feel guilty about eating at the restaurant instead of helping, right? ("yes") This is your conscience, right? ("yes") Your conscience is from God, right? ("yes") People in Africa are just as important as people in the US, right? ("yes") Therefore, isn't it wrong to eat at a restaurant instead of donating the money to help starving kids in Africa? ("no") Why? ("it just isn't!")... at which point they would insist that if I truly believed this was wrong, I should act accordingly, and I just told them "No, I can't, I'm too selfish... and besides, saving eternal souls is more important than feeding starving children." Then I looked at all the smart, unselfish adults I knew who still ate at restaurants, told myself I must be wrong somehow, and avoided thinking about the issue until we read Singer's Famine, Affluence, and Morality in college (In my final semester, this was the class where it first occurred to me that there was nothing wrong with putting effort into school beyond what was necessary for perfect grades). I was really excited when we read it and was eagerly anticipating discussing it the next class to finally hear if someone could give a solid refutation of my old idea. My professor cancelled class that day, and we never went back to the topic. I cared, but unfortunately not quite enough to go talk to my professor outside of class. That was for nerds. So I went on believing it was "wrong" to eat in restaurants, but to protect my sanity, didn't think about it or do anything about it, even after de-converting from Christianity... until I came across Scott's post Nobody Is Perfect, Everything is Commensurable, which seems incredibly obvious in hindsight, yet was exactly what I needed to hear at the time.

I don't think we'd be better off if the norm was to have, say equal preference ratios for everyone in the world.

I disagree. I think we would be better off if society could somehow advance to a stage where such unselfishness was the norm. Whether this is possible is another question entirely, but I keep trying to rid myself of the habit of thinking natural = better (personally, I see this habit as another effect of Christianity; I'm continually amazed to find just how much of my worldview it shaped).

I think that exploring and answering [Why don't I want selfish preference ratios?] will be helpful.

I want to answer this question with "because emotion!" Is this allowed? Or is it akin to "explaining" something by calling it an emergent phenomenon?

1) Rationally, I can't trace this back any farther than calling it a feeling. Was I born with this feeling? Is it the result of society? I don't know. I don't honestly think unselfish preference ratios would lead to a personal increase in my overall happiness, that's for sure. Take effective altruism, for example. When I donate money, I don't feel warm and fuzzy. I get a very small amount of personal satisfaction, societal respect, and a tiny reduction in the (already very small) guilt I feel for having such a good life. But honestly I rarely think about it, and I'm 99.99% sure the overall impact on my happiness is much smaller than if I were to use the money to fly to Guatemala and take a few weeks' vacation to visit old friends. Yet, even as I acknowledge this, I still want to donate. I don't know why. So I think that based solely on my intuition here, I might disagree with you and find personal happiness and altruism to be two separate terminal goals, often harmonious but sometimes conflicting.

2) Analyze emotion?? Can you do that?! As an istp, just identifying emotion is difficult enough.

As for your points about Eliezer...

a) Yeah, I have considered this too. But I think most of his audience is rational enough that if he said something that wasn't rational, his credibility could take a hit. Whether this would stop him and how much of a consequentialist he really is, I have no idea.

b) Yeah, this is an interesting microcosm of the issue of whether we want to believe what is true vs. what is best for society. That said, I'm not saying Eliezer is wrong. My intuition does take his side now, but I usually don't trust my intuitions very much.

Anyway, I went back through the book and found the title of the post. It's Terminal Values and Instrumental Values. You can jump to "Consider the philosopher."

Harry had that helpless feeling again. Thou shalt not become a Dark Lord was such an obvious theorem in his moral system that it was hard to describe the actual proof steps. "Um, people would get hurt?"

"Surely you've wanted to hurt people," said Professor Quirrell. "You wanted to hurt those bullies today. Being a Dark Lord means that people you want to hurt get hurt."

Good quote! Right now, I interpret this as showing how personal happiness and "altruism/not becoming a Dark Lord" are both inexplicable, perhaps sometimes competing terminal values... how do you interpret it?

Comment author: adamzerner 16 April 2015 01:32:08AM *  0 points [-]

Could you please elaborate on this idea a little? ... I'm interested in exactly how you tie it in with Kaczynski, and I also think it's relevant to my current dilemma.

Sure!

In brief: Kaczynski seems to have realized that economies are driven by wanting, not liking, and that this will lead to unhappiness. I think that conclusion is too strong, though; I'd just say that it'll lead to inefficiency.

Longer explanation: OK, so the economy is pretty much driven by what people choose to buy and where people choose to work. People aren't always so good at making these choices. One reason is that they don't actually know what will make them happy.

  • Example: job satisfaction is important, and lots of subtle things influence it. For example, there's something about work like farming that produces satisfaction and contentment. People don't value those qualities enough -> those jobs disappear -> people miss out on the opportunity to be satisfied and content.

Another reason people aren't good at making choices is that they don't always have the willpower to do what they know they should.

  • Example: if people were smart, McDonald's wouldn't be the huge empire that it is. People choose to eat at McDonald's because they don't give enough weight to the consequences for their future selves. McDonald's is huge because tons of people make this mistake. If people were smart, MealSquares and McDonald's would be flip-flopped.

Kaczynski seems to focus more on the first example, but I think they're both important. Economies are driven by the decisions we make. Given the predictable mistakes people make, society will suffer in predictable ways. Kaczynski seems to have realized this.

I avoided using the terms "wanting" and "liking" on purpose. I'll just say quickly that words are just symbols that refer to things, and as long as two people are using the same symbol-thing mappings, it doesn't matter which words they use. What's important is that you seem to understand the distinction between the two things as far as wanting/liking goes. I do see what you mean about the term "wanting", and now that I think about it, I agree with you.

(I've avoided elaboration and qualifiers in favor of conciseness and clarity. Let me know if you want me to say more.)

Edit: I'm about 95% sure that there's actual neuroscience research behind the wanting vs. liking thing. I.e. they've found a distinct brain area that corresponds to wanting, and a different distinct brain area that corresponds to liking.

Note: I studied neuroscience in college. I did research in a lab where we studied vision in monkeys, and part of this involved stimulating the monkey's brain. There was a point where we were able to get the monkey to make basically any eye movement we wanted (based on where and how much we stimulated). It didn't provide me with any new information as far as free will goes, but literally seeing it in person with my own eyes influenced me on an emotional level.

That's why I've never tried smoking; I was scared I might like it and start to want it.

Interesting, I've never smoked, drank or done any drugs at all for similar reasons. Well, that's part of the story.

Would I truly want it? Or am I just addicted?

I'm going to guess that the reason you wouldn't want to do drugs even if you knew they'd make you happy is that a) they'd sort of numb you and keep you from thinking critically and making decisions, and b) you wouldn't get to do good for the world. Your current lifestyle doesn't seem to be preventing you from doing either of those.

"What if there were children in pain from starvation right outside the restaurant, and you knew the money you would spend in the restaurant could buy them rice and beans for two weeks... you would feel guilty about eating at the restaurant instead of helping, right?

:) I've proposed the same thought experiment except with buying diamonds. Eg. "Imagine that you go to the diamond store to buy a diamond, and there were x thousand starving kids in the parking lot who you could save if you spent the money on them instead. Would you still buy the diamond?"

And in the case of diamonds, it's not only a) the opportunity cost of doing good with the money - it's also that b) you're supporting an inhumane organization and c) you're falling victim to a ridiculous marketing scheme that gets you to pay tens of thousands of dollars for a shiny rock. The post Diamonds are Bullshit on Priceonomics is great.

Furthermore, people do a, b and c in the name of love. To me, that seems about as anti-love as it gets. Sorry, this is a pet peeve of mine. It's amazing how far you can push a human away from what's sensible. If I had an online dating profile, I think it'd say, "If you still think you'd want a diamond after reading this, then I hate you. If not, let's talk."

I know I haven't acknowledged the main counterargument, which is that the sacrifice is a demonstration of commitment, but there are ways of doing that without doing a, b and c.

Why? ("it just isn't!")

That sort of thinking baffles me as well. I've tried to explain to my parents what a cost-benefit analysis is... and they just don't get it. This post has been of moderate help to me because I understood what virtue ethics are after reading it (I'd never understood it before).

People who say "it just isn't" don't think in terms of cost-benefit analyses. They just have ideas about what is and isn't virtuous. As people like us have figured out, if you follow these virtues blindly, you'll run into ridiculousness and/or inconsistency.

However, this isn't to say that virtue-driven thinking doesn't have its uses. Like all heuristics, it trades accuracy for speed, which is sometimes a worthwhile trade-off.

I disagree. I think we would be better off if society could somehow advance to a stage where such unselfishness was the norm.

I'm glad to hear you disagree :) But I sense that I may not have explained what I think and why I think it. If you could just flip a switch and make everyone have equal preference ratios, I think that'd probably be a good thing.

What I'm trying to say is that there is no switch, and that making our preference ratios more equal would be very difficult. Ex. try to make yourself care about a random accountant in China as much as you do about, say, your aunt. As far as cost-benefit analysis goes, the effort and unease of doing this would be a cost. I sense that the costs aren't always worth the benefits, and that given this, it's socially optimal for us to accept our uneven preference ratios to some extent. Thoughts?

Good quote! Right now, I interpret this as showing how personal happiness and "altruism/not becoming a Dark Lord" are both inexplicable, perhaps sometimes competing terminal values... how do you interpret it?

I interpret it as "Harry seems to think there are good reasons for choosing certain terminal values. Terminal values seem arbitrary to me."

Comment author: [deleted] 17 April 2015 06:08:28AM 0 points [-]

(I've avoided elaboration and qualifiers in favor of conciseness and clarity. Let me know if you want me to say more.)

Nope, your longer explanation was perfect, and now I understand, thanks. I'm just a little curious why you would say those things lead to inefficiency instead of unhappiness, but you don't have to elaborate any more here unless you feel like it.

Well, that's part of the story.

Again, now I'm slightly curious about the rest of it...

I'm going to guess that the reason why you wouldn't want to do drugs even if you knew they'd make you happy is because a) it'd sort of numb you away from thinking critically and making decisions, and b) you wouldn't get to do good for the world. Your current lifestyle doesn't seem to be preventing you from doing either of those.

Good guess. You're right. But (I initially thought) smoking would hardly prevent those things, and I still don't want to smoke. Then again, addiction could interfere with a), and the opportunity cost of buying cigarettes could interfere with b).

I've proposed the same thought experiment except with buying diamonds.

No way! A while back, I Facebook-shared a very similar link about the ridiculousness of the diamond marketing scheme and proposed various alternatives to spending money on a diamond ring. I wasn't even aware that the organization was inhumane... yikes, information like that should be common knowledge. Also, probably at least some people don't really want to get a diamond ring... but by the time the relationship gets serious, they can't get themselves to bring it up (girls don't want to be presumptuous, guys don't want to risk a conflict?), so yeah, definitely a good kind of thing to get out of the way in a dating profile, haha.

This post has been of moderate help to me because I understood what virtue ethics are after reading it.

Wow, that's so interesting, I'd never heard of virtue ethics before. I have many thoughts/questions about this, but let's save that conversation for another day so my brain doesn't suffer an overuse injury. My inner virtue-ethicist wants to become a more thoughtful person, but I know myself well enough to know that if I dive into all this stuff head first, it will just end up being "a weird thinking phase I went through once"; instrumentally, I want to be thoughtful because of my terminal value of caring about the world. (My gut reaction: Virtues are really just instrumental values that make life convenient for people whose terminal values are unclear/intimidating. (Like how the author of the link chose loyalty as a virtue. I bet we could find a situation in which she would abandon that loyalty.) But I also think that there's a place for cost-benefit analysis even within virtue ethics, and that virtue ethicists with thoughtfully-chosen virtues can be more efficient consequentialists, which probably doesn't make much sense, but I'd like to be both, please!)

If you could just flip a switch and make everyone have equal preference ratios, I think that'd probably be a good thing...it's socially optimal for us to accept our uneven preference ratios to some extent. Thoughts?

Oh, yeah, that makes sense to me. Kind of like capitalism, it seems to work better in practice if we just acknowledge human nature. But gradually, as a society, we can shift the preference ratios a bit, and I think we maybe are. :) We can point to a decrease in imperialism, the budding effective altruism movement, or even veganism's growing popularity as examples of this shifting preference ratio.

Comment author: adamzerner 16 April 2015 01:22:27AM *  0 points [-]

For reference:

But I don't want to admit my preference ratios are that far out of whack; I don't want to be that selfish.

Why?

I think that exploring and answering that question will be helpful.

Try thinking about it in two ways:

1) A rational analysis of what you genuinely think makes sense. Note that rational does not mean completely logical.

2) An emotional analysis of what you feel, why you feel it, and, in the event that your feelings aren't accurate, how you can nudge them to be more accurate.

You:

I want to answer this question with "because emotion!" Is this allowed?

Also:

Analyze emotion?? Can you do that?! As an ISTP, just identifying emotion is difficult enough.

Absolutely! That's how I'd start off. But the question I was getting at is "why does your brain produce those emotions?" What is the evolutionary psychology behind it? What events in your life have conditioned you to produce these emotions?

By default, I think it's natural to give a lot of weight to your emotions and be driven by them. But once you really understand where they come from, I think it's easier to give them a more appropriate weight, and consequently, to better achieve your goals. (1,2,3)

And you could manipulate your emotions too. Examples: you'll be less motivated to go to the gym if you lie down on the couch. You'll be more motivated to go to the gym if you tell your friends that you plan on going to the gym every day for a month.

So I think that based solely on my intuition here, I might disagree with you and find personal happiness and altruism to be two separate terminal goals, often harmonious but sometimes conflicting.

So you don't think terminal goals are arbitrary? Or are you just proclaiming what yours are?

Edit:

But honestly I rarely think about it, and I'm 99.99% sure the overall impact on my happiness is much smaller than if I were to use the money to fly to Guatemala and take a few weeks' vacation to visit old friends. Yet, even as I acknowledge this, I still want to donate. I don't know why.

Are you sure that this has nothing to do with maximizing happiness? Perhaps the reason why you still want to donate is to preserve an image you have of yourself, which presumably is ultimately about maximizing your happiness.

(Below is a thought that ended up being a dead end. I was going to delete it, but then I figured you might still be interested in reading it.)

Also, an interesting thought occurred to me related to wanting vs. liking. Take a person who starts off with only the terminal goal of maximizing his happiness. Imagine that the person then develops an addiction, say to smoking. And imagine that the person doesn't actually like smoking, but still wants to smoke. I.e. smoking does not maximize his happiness, but he still wants to do it. Should he then decide that smoking is a terminal goal of his?

I'm not trying to say that smoking is a bad terminal goal, because I think terminal goals are arbitrary. What I am trying to say is that... he seems to be actually trying to maximize his happiness, but just failing at it.

DEAD END. That's not true. Maybe he is actually trying to maximize his happiness, maybe he isn't. You can't say whether he is or he isn't. If he is, then it leads you to say "Well if your terminal goal is ultimately to maximize your happiness... then you should try to maximize your happiness (if you want to achieve your terminal goals)." But if he isn't (just) trying to maximize happiness, he could add in whatever other terminal goals he wants. Deep down I still notice a bit of confusion regarding my conclusion that goals are arbitrary, and so I find myself trying to argue against it. But every time I do I end up reaching a dead end :/

Anyway, I went back through the book and found the title of the post. It's Terminal Values and Instrumental Values. You can jump to "Consider the philosopher."

Thank you! That does seem to be a/the key point in his article. Although "I value the choice" seems like a weird argument to me. I never thought of it as a potential counterargument. From what I can gather from Eliezer's cryptic rebuttal, I agree with him.

I still don't understand what Eliezer would say to someone who said, "Preferences are selfish and Goals are arbitrary".


1 - Which isn't to imply that I'm good at this. Just that I sense that it's true and I've had isolated instances of success with it.

2 - And again, this isn't to imply that you should give your emotions no weight and be a robot. I used to be uncomfortable with just an "intuitive sense" and not really understanding the reasoning behind it. Reading How We Decide changed that for me. 1) It really hit me that there is "reasoning" behind the intuitions and emotions you feel. I.e. your brain does some unconscious processing. 2) It hit me that I need to treat these feelings as Bayesian evidence and consider how likely it is that I'd have that intuition when the intuition is wrong vs. how likely it is that I'd have it when the intuition is right (there's a toy calculation after these notes).

3 - This all feels very "trying-to-be-wise-sounding", which I hate. But I don't know how else to say it.
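To make the Bayesian-evidence idea in note 2 concrete, here's a minimal sketch in Python with made-up numbers (the prior and both likelihoods are pure assumptions for illustration, not taken from anywhere):

    # Toy odds-ratio update: how much should noticing a strong intuition
    # shift my belief that the intuition's conclusion is actually right?
    # Every number below is made up purely for illustration.
    prior = 0.5                # P(conclusion is right) before noticing the intuition
    p_feel_if_right = 0.8      # P(I feel this intuition | conclusion is right)
    p_feel_if_wrong = 0.3      # P(I feel this intuition | conclusion is wrong)

    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_feel_if_right / p_feel_if_wrong
    posterior_odds = prior_odds * likelihood_ratio
    posterior = posterior_odds / (1 + posterior_odds)
    print(round(posterior, 2))  # ~0.73: real evidence, but far from proof

The point is just that the intuition moves the needle in proportion to how much more often it shows up when it's right than when it's wrong.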

Comment author: [deleted] 17 April 2015 06:10:29AM *  0 points [-]

Oops, just when I thought I had the terminology down. :( Yeah, I still think terminal values are arbitrary, in the sense that we choose what we want to live for.

So you think our preference is, by default, the happiness mind-state, and our terminal values may or may not be the most efficient personal happiness-increasers. Don't you wonder why a rational human being would choose terminal goals that aren't? But we sometimes do. Remember your honesty in saying:

Regarding my happiness, I think I may be lying to myself though. I think I rationalize that the same logic applies, that if I achieve some huge ambition there'd be a proportional increase in happiness. Because my brain likes to think achieving ambition -> goodness and I care about how much goodness gets achieved. But if I'm to be honest, that probably isn't true.

I have an idea. So based on biology and evolution, it seems like a fair assumption that we humans naturally put ourselves first, all the time. But is it at all possible for humans to have evolved some small, pure, genuine concern for others (call it altruism/morality/love) that coexists with our innate selfishness? Like, one human was born with an "altruism mutation", other humans realized he was nice to have around, so he survived, and the gene is still working its way through society, shifting our preference ratios? It's a pleasant thought, anyway.

But honestly, I literally didn't even know what evolution was until several weeks ago, so I have no business bringing up any science at all yet; let me switch back to personal experience and thought experiments.

For example, let's say my preferences are 98% affected by selfishness and maybe 2% by altruism, since I'm very stingy with my time but less so with my money. (Someone who would die for someone else would have different numbers.) Anyway, on the surface I might look more altruistic because there is a LOT of overlap between decisions that are good for others and decisions that make me feel good. Or, you could see the giant overlap and assume I'm 100% selfish. When I donate to effective charities, I do receive benefits like liking myself a bit more, real or perceived respect from the world, a small burst of fuzzy feelings, and a decrease in the (admittedly small) amount of personal guilt I feel about the world's unfairness. But if I had to put a monetary value on the happiness return from a $1000 donation, it would be less than $1000. When I use a preference ratio and prefer other people's happiness, their happiness does make me happy, but there isn't a direct correlation between how happy it makes me and the extent to which I prefer it. So maybe preference ratios can be based mostly on happiness, but are sometimes tainted with a hint of genuine altruism?
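If I try to make that concrete with a toy calculation in Python (every number is made up just for illustration, including the guess that a donated dollar buys a recipient far more well-being than it buys me):

    # Toy weighted-preference model; all numbers are made up for illustration.
    # Even a 2% weight on others' happiness can favor donating, if a dollar
    # buys a recipient much more well-being than it buys me.
    selfish_weight = 0.98
    altruistic_weight = 0.02
    my_happiness_per_dollar = 1.0       # value to me of spending $1 on myself
    their_happiness_per_dollar = 100.0  # guessed value to a recipient of that $1

    def preference_score(dollars_on_myself, dollars_donated):
        return (selfish_weight * my_happiness_per_dollar * dollars_on_myself
                + altruistic_weight * their_happiness_per_dollar * dollars_donated)

    print(preference_score(1000, 0))  # spend the $1000 on myself: 980.0
    print(preference_score(0, 1000))  # donate the $1000: 2000.0

Under those made-up numbers, the tiny altruism weight is enough to tip the choice toward donating even though the selfish happiness return is well under $1000, which is roughly the discrepancy I'm trying to point at.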

Also, what about diminishing marginal returns with donating? Will someone even feel a noticeable increase in good feelings/happiness/satisfaction giving 18% rather than 17%? Or could someone who earns 100k purchase equal happiness by donating just 17k and be free to spend the extra 1k on extra happiness in the form of ski trips or berries or something (unless he was the type to never eat in restaurants)? Edit: never mind this paragraph; even if it's realistic, it's just scope insensitivity, right?

But similarly, let's say someone gives 12% of her income. Her personal happiness would probably be higher giving 10% to AMF and distributing 2% in person via random acts of kindness than it would giving all 12% to AMF. Maybe you're thinking that this difference would affect her mind-state, that she wouldn't be able to think of herself as such a rational person if she did that. But who really values their self-image of being a rational opportunity-cost analyzer that highly? I sure don't (well, 99.99% sure anyway).

Sooo, could real altruism exist in some people and affect their preference ratios just like personal happiness does, but to a much smaller extent? Look at (1) your quote about your ambition, (2) my desire to donate despite my firm belief that the happiness opportunity cost outweighs the happiness benefits, (3) people who are willing to die for others and terminate their own happiness, and (4) people who choose to donate via effective altruism rather than random acts of kindness.

Anyway, if there were an altruism mutation somewhere along the way, and altruism could shape our preferences like happiness does, it would be a bit easier to understand the seeming discrepancy between preferences and terminal goals, between likes and wants. Here I will throw out a fancy new rationalist term I learned, and you can tell me if I misunderstand it or am wrong to think it might apply here... Occam's razor?

Anyway, in case this idea is all silly and confused, and altruism is a socially conditioned emotion, I'll attempt to find its origin. Not from giving to church (it was only fair that the pastors/teachers/missionaries got their salaries and the members helped pay for building costs, electricity, etc.). I guess there was the whole "we love because He first loved us" idea, which I knew well and regurgitated often, but I don't think I ever truly internalized it. I consciously knew I'd still care about others just as much without my faith. Growing up, I knew no one who donated to secular charity, or at least no one who talked about it. The only thing I knew that came close to resembling large-scale altruism was when people chose to be pastors and teachers instead of pursuing high-income careers, but if they did it simply to "follow God's will" I'm not sure it counts as genuinely caring about others more than yourself. On a small scale, my mom was really altruistic, like willing to give us her entire portion of an especially tasty food, offer us her jacket when she was cold too, etc. ... and I know she wasn't calculating cost-benefit ratios, haha. So I guess she could have instilled it in me? Or maybe I read some novels with altruistic values? Idk, any other ideas?

I still don't understand what Eliezer would say to someone that said, "Preferences are selfish and Goals are arbitrary".

I'm no Eliezer, but here's what I would say: Preferences are mostly selfish but can be affected by altruism, and goals are somehow based on these preferences. Whether or not you call them arbitrary probably depends on how you feel about free will. We make decisions. Do our internal mental states drive these decisions? Put in the same position 100 times, with the same internal mental state, would someone make the same decision every time, or would it be 50-50? We don't know, but either way, we still feel like we make decisions (well, except when it comes to belief, in my experience anyway), so it doesn't really matter too much.