els comments on On not getting a job as an option - Less Wrong Discussion
Yeah, this has gotten a little too tangled up in definitions. Let's try again, but from the same starting point.
Happiness = preferred mind-state (similar, potentially interchangeable terms: satisfaction, pleasure)
Goodness = what leads to a happier outcome for others (similar, potentially interchangeable terms: morality, altruism)
I guess my whole idea is that goodness is kind of special. Most people seem born with it, to one extent or another. I think happiness and goodness are the two ultimate motivators. I even think they're the only two ultimate motivators. Or at least I can't think of any other supposed motivation that couldn't be traced back to one or both of these.
Pursuing a virtue like loyalty will usually lead to happiness and goodness. But is it really the ultimate motivator, or was there another reason behind this choice, i.e. it makes the virtue ethicist happy and she believes it benefits society? I'm guessing that in certain situations, the author might even abandon the loyalty virtue if it conflicted with the underlying motivations of happiness and goodness. Thoughts?
Edit: I guess I'm realizing the way you defined preference doesn't work for me either, and I should have said so in my other comment. I would say prefer simply means "tend to choose." You can prefer something that doesn't lead to the happiest mind-state, like a sacrificial death, or here's an imaginary example:
You have to choose: Either you catch a minor cold, or a mother and child you will never meet will get into a car accident. The mother will have serious injuries, and her child will die. Your memory of having chosen will be erased immediately after you choose regardless of your choice, so neither guilt nor happiness will result. You'll either suddenly catch a cold, or not.
Not only is choosing to catch a cold an inefficient happiness-maximizer like donating to effective charities, this time it will actually have a negative effect on your happiness mind-state. Can you still prefer that you catch a cold? According to what seems to me like common real-world usage of "prefer" you can. You are not acting in some arbitrary, irrational, inexplicable way in doing so. You can acknowledge you're motivated by goodness here, rather than happiness.
In a way, I think this is true. Actually, I should give more credit to this idea - yeah, it's true in an important way.
My quibble is that motivation is usually not rational. If it were, then I think you'd be right. But the way our brains produce motivation isn't rational. Sometimes we are motivated to do something... "just because". I.e. even if our brain knows that it won't lead to happiness or goodness, it could still produce motivation.
And so in a very real sense, motivation itself is often something that can't really be traced back. But I try really hard to respond to what people's core points are, and what they probably meant. I'm not precisely sure what your core point is, but I sense that I agree with it. That's the strongest statement I could make.
Unfortunately, I think my scientific background is actually harming me right now. We're talking about a lot of things that have very precise scientific meanings, and in some cases I think you're deviating from them a bit. Which really isn't too big a deal because I should be able to infer what you mean and progress the conversation, but I think I'm doing a pretty mediocre job of that. When I reflect, I find it difficult to deviate from the definitions I'm familiar with, which is sort of bad "conversational manners", because the only point of words in a conversation is to communicate ideas, and it'd probably be more efficient if I were better able to use other definitions.
Haha, you seem to be confused about virtue ethics in a good way :)
A true virtue ethicist would completely and fully believe that their virtue is inherently desirable, independent of anything and everything else. So a true virtue ethicist who values the virtue of loyalty wouldn't care whether the loyalty led to happiness or goodness.
Now, I think that consequentialism is a more sensible position, and I think you do too. And in the real world, virtue ethicists often have virtues that include happiness and goodness. And if they run into a conflict between say the virtue of goodness and the one of loyalty, well I don't know how they'd resolve it, but I think they'd give some weight to each, and so in practice I don't think virtue ethicists end up acting too crazy, because they're stabilized by their virtues of goodness and happiness. On the other hand, a virtue ethicist without the virtue of goodness... that could get scary.
I hadn't thought about it before, but now that I do I think you're right. I'm not using the word "prefer" to mean what it really means. In my thought experiment I started off using it properly in saying that one mind-state is preferable to another.
But the error I made is in defining ACTIONS to be preferable because the resulting MIND-STATES are preferable. THAT is completely inconsistent with the way it's commonly used. In the way it's commonly used, an action is preferable... if you prefer it.
I'm feeling embarrassed that I didn't realize this immediately, but am glad to have realized it now because it allows me to make progress. Progress feels so good! So...
THANK YOU FOR POINTING THIS OUT!
Absolutely. But I think that I was wrong in an even more general sense than that.
So I think you understood what I was getting at with the thought experiment though - do you have any ideas about what words I should substitute in that would make more sense?
(I think that the fact that this is the slightest bit difficult is a huge failure of the English language. Language is meant to allow us to communicate. These are important concepts, and our language isn't giving us a very good way to communicate them. I actually think this is a really big problem. The linguistic-relativity hypothesis basically says that our language restricts our ability to think about the world, and I think (and it's pretty widely believed) that it's true to some extent (the extent itself is what's debated).)
Yay, agreement :)
Great point. I actually had a similar thought and added the qualifier "psychological" in my previous comment. Maybe "rational" would be better. Maybe there are still physical motivators (addiction, inertia, etc?) but this describes the mental motivators? Does this align any better with your scientific understanding of terminology? And don't feel bad about it, I'm sure the benefits of studying science outweigh the cost of the occasional decrease in conversation efficiency :)
Then I think very, very few virtue ethicists actually exist, and virtue ethics is so abnormal it could almost qualify as a psychological disorder. Take the common ethics dilemma of exposing hidden Jews: if someone's virtue were "honesty," they would have to expose them. (In the philosophy class I took, we resolved this dilemma by redefining "truth" and capitalizing it; e.g. Timmy's father is a drunk. Someone asks Timmy if his father is a drunk. Timmy says no. Timmy told the Truth.) We whizzed through that boring old "correspondence theory" in ten seconds flat. I will accept any further sympathy you wish to express. Anyway, I think that any virtue besides happiness and goodness will have some loophole where 99% of people will abandon it if they run into a conflict between their chosen virtue and the deeper psychological motivations of happiness and goodness.
Edit: A person with extremely low concern for goodness is a sociopath. The amount of concern someone has for goodness as a virtue vs. amount of concern for personal happiness determines how altruistic she is, and I will tentatively call this a psychological motivation ratio, kind of like a preference ratio. And some canceling occurs in this ratio because of overlap.
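One possible way to make that ratio concrete, as a toy sketch (the weights and the normalization here are purely hypothetical, not anything the comment commits to):

```python
# Toy sketch of an "altruism ratio": compare the weight someone places on
# goodness (others' happiness) with the weight placed on personal happiness.
# The numbers below are made up for illustration.

def altruism_ratio(goodness_weight: float, happiness_weight: float) -> float:
    """Fraction of total motivation that comes from goodness.

    Returns a value in (0, 1); higher means more altruistic.
    """
    return goodness_weight / (goodness_weight + happiness_weight)

# A mostly-altruistic person vs. a mostly self-interested one:
print(altruism_ratio(3.0, 1.0))  # 0.75
print(altruism_ratio(1.0, 4.0))  # 0.2
```

Normalizing by the total (rather than taking a raw goodness/happiness quotient) just keeps the number bounded; either form captures the same comparison.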
Yes! I wish I could have articulated it that clearly for you myself.
Instead of saying we "prefer" an optimal mind-state... you could say we "like" it the most, but that might conflict with your scientific definitions for likes and wants. But here's an idea, feel free to critique it...
"Likes" are things that actually produce the happiest, optimal mind-states within us
"Wants" are things we prefer, things we tend to choose when influenced by psychological motivators (what we think will make us happy, what we think will make the world happy)
Some things, like smoking, we neither like (or maybe some people do, idk) nor want, but we still do because the physical motivators overpower the psychological motivators (i.e. we have low willpower)
Absolutely!! I'll check out that link.
Hmmm, so the question I'm thinking about is, "what does it mean to say that a motivation is traced back to something?" It seems to me that the answer involves terminal and instrumental values. Like if a person is motivated to do something, but is only motivated to do it to the extent that it leads to the person's terminal value, then it seems that you could say that this motivation can be traced back to that terminal value.
And so now I'm trying to evaluate the claim that "motivations can always be traced back to happiness and goodness". This seems to be conditional on happiness and goodness being terminal goals for that person. But people could, and often do, choose whatever terminal goals they want. For example, people have terminal goals like "self improvement" and "truth" and "be a man" and "success". And so, I think that a person with a terminal goal other than happiness and goodness will have motivations that can't be traced back to happiness or goodness.
But I think that it's often the case that motivations can be traced back to happiness and goodness. Hopefully that means something.
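One rough way to picture "tracing back" (the value names and the chain structure here are purely illustrative assumptions): each instrumental value points at the value it serves, and you follow the chain until you hit a value that points at nothing, i.e. a terminal value.

```python
# Illustrative sketch: model values as a chain where each instrumental
# value maps to the value it serves. Terminal values map to None.
# All entries below are hypothetical examples, not a real taxonomy.

VALUE_CHAIN = {
    "donate to charity": "goodness",
    "earn money": "personal happiness",
    "self improvement": "personal happiness",
    "goodness": None,             # terminal: an end-in-itself
    "personal happiness": None,   # terminal: an end-in-itself
}

def trace_back(value: str) -> str:
    """Follow the chain of instrumental values down to a terminal value."""
    while VALUE_CHAIN.get(value) is not None:
        value = VALUE_CHAIN[value]
    return value

print(trace_back("donate to charity"))  # goodness
print(trace_back("self improvement"))   # personal happiness
```

The claim in dispute is then just: does every chain a real person acts on bottom out in "goodness" or "personal happiness", or can someone hold, say, "truth" as a node that maps to None?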
Wait... so the Timmy example was used to argue against correspondence theory? Ouch.
Perhaps. Truth might be an exception for some people. Ex. some people may choose to pursue the truth even if it's guaranteed to lead to decreases in happiness and goodness. And success might also be an exception for some people. They also may choose to pursue success even if it's guaranteed to lead to decreases in happiness and goodness. But this becomes a question of some sort of social science rather than of philosophy.
I like the concept! I propose that you call it an altruism ratio as opposed to a psychological motivation ratio because I think the former is less likely to confuse people.
Eh, I think that this would conflict with the way people use the word "like" in a similar way to the problems I ran into with "preference". For example, it makes sense to say that you like mind-state A more than mind-state B. But I'm not sure that it makes sense to say that you necessarily like action A more than action B, given the way people use the term "like". Damn language! :)
I had just reached the same conclusion myself! So I think that yeah, happiness and goodness are the only terminal values, for the vast majority of the thinking population :)
Note: I really don't like the term "happiness" to describe the optimal mind-state since I connect it too strongly with "pleasure" so maybe "satisfaction" would be better. I think of satisfaction as including both feelings of pleasure and feelings of fulfillment. What do you think?
I think that all these are really just instrumental goals that people subconsciously, and perhaps mistakenly, believe will lead them to their real terminal goals of greater personal satisfaction and/or an increase in the world's satisfaction.
It was an example of whatever convoluted theory my professor invented as a replacement for correspondence theory.
Exactly. I think people like the ones you mention are quite rare.
Ok, thanks :)
What if language isn't the problem? Maybe the connection between mind-states and actions isn't so clear-cut after all. If you like mind-state A more than mind-state B, then action A is mind-state-optimizing, but I'm not sure you can go much farther than that... because goodness.
:)
I haven't found a term that I really like. Utility is my favorite though.
Idk, I want to agree with you but I sense that it's more like 95% of the population. I know just the 2 people to ask though. My two friends are huge proponents of things like "give it your all" and "be a man".
Also, what about religious people? Aren't there things they value independent of happiness and goodness? And if so wouldn't their motivations reflect that?
Edit:
Friend 1 says it's ultimately about avoiding feeling bad about himself, which I classify as him wanting to optimize his mind-state.
Friend 2 couldn't answer my questions and said his decisions aren't that calculated.
Not too useful after all. I was hoping that they'd be more insightful.
Oooooo I like that term!
It seems clear-cut to me. An action leads to one state of the world, and in that state of the world you have one mind-state. Can you elaborate?
Not sure what you mean by that either.
Yeah, ask those friends if in a situation where "giving it their all" and "being men" made them less happy and made the world a worse place, whether they would still stick with their philosophies. And if they genuinely can't imagine a situation where they would feel less satisfied after "giving it their all," then I would postulate that as they're consciously pursuing these virtues, they're subconsciously pursuing personal satisfaction. (Edit: Just read a little further, that you already have their responses. Yeah, not too insightful, maybe I'll develop this idea a bit more and ask the rest of the LW community what they think.) (Edit #2: Thought about this a little more, and I have a question you might be able to answer. Is the subconscious considered psychological or physical?)
As for religious people...well, in the case of Christianity, people would probably just want to "become Christ-like" which, for them, overlaps really well with personal satisfaction and helping others. But in extreme cases, someone might truly aspire to "become obedient to X" in which case obedience could be the terminal value, even if the person doesn't think obedience will make them happy or make the world a better place. But I think that such ultra-religiosity is rare, and that most people are still ultimately psychologically motivated to either do what they think will make them happy, or what they think will make the world a better place. I feel like this is related to Belief in Belief but I can't quite articulate the connection. Maybe you'll understand, if not, I'll try harder to verbalize it.
No, if that's all you're saying, that "If you like mind-state A more than mind-state B, then action A is mind-state-optimizing", then I completely agree! For some reason, I read your sentence ("But I'm not sure that it makes sense to say that you necessarily like action A more than action B, given the way people use the term 'like'") and thought you were trying to say they necessarily like action A more... haha, oops
How about this answer: "If that makes me less happy and makes the world a worse place, the world would be decidedly weird in a lot of fundamental and ubiquitous ways. I am unable to comprehend what such a weird world would be like in enough detail to make meaningful statements about what I would do in it."
Let's just focus on "giving it your all." What is "it"?? You surely can't give everything your all. How do you choose which goals to pursue? "Giving it your all" is a bit abstract.
That's exactly what I asked them.
The first one took a little prodding but eventually gave a somewhat passable answer. And he's one of the smartest people I've ever met. The second one just refused to address the question. He said he wouldn't approach it that way and that his decisions aren't that calculated. I don't know how you want to explain it, but for pretty much every person I've ever met or read, sooner or later they seem to just flinch away from the truth. You seem to be particularly good at not doing that - I don't think you've demonstrated any flinching yet.
And see what I mean about how the ability to not flinch is often the limiting factor? In this case, the question wasn't really difficult in an intellectual way at all. It just requires you to make a legitimate effort to accept the truth. The truth is often uncomfortable to people, and thus they flinch away, don't accept it, and fail to make progress.
I could definitely answer that! This really gets at the core of the map vs. the territory (maybe my favorite topic :) ). "Physical" and "psychological" are just two maps we use to describe reality. In reality itself, the territory, there's no such thing as physical/psychological. If you look at the properties of individual atoms, they don't have any sort of property that says "I'm a physical atom" or "I'm a psychological atom". They only have properties like mass and electric charge (as far as we know).
I'm not sure how much you know about science, but I find the physics-chemistry-biology spectrum to be a good demonstration of the different levels of maps. Physics tries to model reality as precisely as possible (well, some types of physics that is; others aim to make approximations). Chemistry approximates reality using the equations of physics. Biology approximates reality using the equations of chemistry. And you could even add psychology in there and say that it approximates reality using the ideas (not even equations) of biology.
As far as psychology goes, a little history might be helpful. It's been a few years since I studied this, but here we go. In the early 1900s, behaviorism was the popular approach to psychology. Behaviorists just tried to look at what inputs lead to what outputs. I.e. they'd say "if we expose people to situation X, how do they respond?" The input is the situation, and the output is how they respond.
Now, obviously there's something going on that translates the input to the output. They had the sense that the translation happens in the brain, but it was a black box to them and they had no clue how it works. Furthermore, they sort of saw it as so confusing that there's no way they could know how it works. And so behaviorists were content to just study what inputs lead to what outputs, and to leave the black box as a mystery.
Then in the 1950s there was the cognitive revolution where they manned up and ventured into the black box. They thought that you could figure out what's going on in there and how the inputs get translated to outputs.
Now we're almost ready to go back to your question - I haven't forgotten about it. So cognitive psychology is sort of about what's going on in our head and how we process stuff. Regarding the subconscious, even though we're not conscious of it, there's still processing going on in that black box, and so the study of that processing still falls under the category of cognitive psychology. But again, cognitive psychology is a high-level map. We're not there yet, but we'd be better able to understand that black box with a lower level map like neuroscience. And we'd be able to learn even more about the black box using an even lower level map like physics.
If you have any other questions or even just want to chat informally about this stuff please let me know. I love thinking about this stuff and I love trying to explain things (and I like to think I'm pretty good at it) and you're really good at understanding things and asking good questions which often leads me to think about things differently and learn new things.
Interesting. I had the impression that religious people had lots of other terminal values. So things like "obeying God" aren't terminal values? I had the impression that most religions teach that you should obey no matter what. That you should obey even if you think it'll lead to decreases in goodness and happiness. Could you clarify?
Edit: I just realized something that might be important. You emphasize the point that there's a lot of overlap between happiness/goodness and other potentially terminal values. I haven't been emphasizing it. I think we both agree that there is the big overlap. And I think we agree that "actions can either be mind-state optimizing, or not mind-state optimizing" and "terminal values are arbitrary".
I think you're right to put the emphasis on this and to keep bringing it up as an important reminder. Being important, I should have given it the attention it deserves. Thanks for persisting!
It took me a while to understand belief in belief. I read the sequences about 2 years ago and didn't understand it until a few weeks ago as I was reading HPMOR. There was a point when one of the characters said he believed something but acted as if he didn't. Like if he believed what he said he believed, he definitely would have done X, because X is clearly in his interest. I just reread Belief in Belief, and now I feel like it makes almost complete sense to me.
From what I understand, the idea with belief in belief is that:
a) There's your model of how you think the world will look.
b) And then there's what you say you believe.
To someone who values consistency, a) and b) should be the same thing. But humans are weird, and sometimes a) and b) are different.
In the scenario you describe, there's a religious person who ultimately wants goodness and would choose goodness over his virtues if he had to pick, but he nevertheless claims that his virtues are terminal goals to him. And so as far as a) goes, you both agree that he would choose goodness over his virtues. But as far as b) goes, you claim to believe different things. What he claims to believe is inconsistent with his model of the world, and so I think you're right - this would be an example of belief in belief.
Yup, that's all I'm trying to say. No worries if you misunderstood :). I hadn't realized that this was ultimately all I was trying to say before talking to you and now I have, so thank you!
Well, thanks! How does that saying go? What is true is already so? Although in the context of this conversation, I can't say there's anything inherently wrong with flinching; it could help fulfill someone's terminal value of happiness. If someone doesn't feel dissatisfied with himself and his lack of progress, what rational reason is there for him to pursue the truth? Obviously, I would prefer to live in a world where relentlessly pursuing the truth led everyone to their optimal mind-states, but in reality this probably isn't the case. I think "truth" is just another instrumental goal (it's definitely one of mine) that leads to both happiness and goodness.
Yeah! I think I first typed the question as "is it physical or psychological?" and then caught myself and rephrased, adding the word "considered" :) I just wanted to make sure I'm not using scientific terms with accepted definitions that I'm unaware of. Thanks for your answer!! You are really good at explaining stuff. I think "cognitive psychology" is related to what I just read about last week in the ebook too, about neural networks, the two different brain map models, and the bleggs and rubes.
I don't know your religious background, but if you don't have one, that's really impressive, given that you haven't actually experienced much belief-in-belief since Santa (if you ever did). But yeah, basically, this sentences summarizes perfectly:
Any time a Christian does anything but pray for others, do faith-strengthening activities, spread the gospel, or earn money to donate to missionaries, he is anticipating as if God/hell doesn't exist. I realized this, and sometimes tried to convince myself and others that we were acting wrongly by not being more devout. I couldn't shake the notion that spending time having fun instead of praying or sharing the gospel was somehow wrong, because it went against God's will of wanting all men to be saved, and I believed God's will, by definition, was right. But I still acted in accordance with my personal happiness some of the time. I said God's will was the only end-in-itself, but I didn't act like it. So like you said, inconsistency. Thanks for helping me with the connection there.
http://wiki.lesswrong.com/wiki/Litany_of_Gendlin
I agree with you that there's nothing inherently wrong with it, but I don't think this is a case of someone making a conscious decision to pursue their terminal goals. I think it's a case of "I'm just going to follow my impulse without thinking".
Haha thanks. I can't remember ever believing in belief, but studying this rationality stuff actually teaches you a lot about how other people think.
I was raised Jewish, but people around me were about as not religious as it gets. I think it's called Reform Judaism. In practice it just means, "go to Hebrew school, have a Bar/Bat Mitzvah, celebrate like 3-4 holidays a year and believe whatever you want without being a blatant atheist".
I'm 22 years old and I genuinely can't remember the last time I believed in any of it, though. I had my Bar Mitzvah when I was 13 and I remember not wanting to do it and thinking that it was all BS. Actually, I think I remember being in Hebrew school one time when we were being taught about God (I believed in God at the time), and I was curious how they knew that God existed, so I asked, and they basically just said, "we just know", and I remember being annoyed by that answer. And now I'm remembering being confused because I wanted to know what God really was, and some people told me he was human-like and had form, and some people just told me he was invisible.
I will say that I thoroughly enjoy Jewish humor though, and I thank the Jews very much for that :). Jews love making fun of their Jewish mannerisms, and it's all in good fun. Even things that might seem mean are taken in good spirit.
Hey, um... I have a question. I'm not sure if you're comfortable talking about it though. Please feel free to not answer.
It sounds really stressful believing that stuff. Like it seems that even people with the strongest faith spend some time deviating from those instructions and do things like have fun or pursue their personal interests. And then you'd feel guilty about that. Come to think of it, it sounds similar to my guilt for ever spending time not pursuing ambitions.
And what about believing in Hell? From what I understand, Christians believe that there's a very non-negligible chance that you end up in Hell, suffering unimaginably for eternity. I'm not exaggerating at all when I say that if I believed that, I would be in a mental hospital crying hysterically and trying my absolute hardest to be a good person and avoid ending up in Hell. Death is one of my biggest fears, and I also fear the possibility of something similar to Hell, even though I think it's a small possibility. Anyway, I never understood how people could legitimately believe in Hell and just go about their lives like everything is normal.
I've tried to clarify my thoughts a bit:
Terminal values are ends-in-themselves. They are psychological motivators, reasons that explain decisions. (Physical motivators like addiction and inertia can also explain our decisions, but a rational person might wish to overcome them.) For most people, the only true terminal values are happiness and goodness. There is almost always significant overlap between the two. Someone who truly has a terminal value that can't be traced back to happiness or goodness in some way is either (a) ultra-religious or (b) a special case for the social sciences.
Happiness ("likes") refers to the optimalness of your mind-state. Hedonistic pleasure and personal fulfillment are examples of things that contribute to happiness.
Goodness refers to what leads to a happier outcome for others.
Preferences ("wants") are what we tend to choose. These can be based on psychological or physical motivators.
Instrumental values are goals or virtues that we think will best satisfy the terminal values of happiness and goodness.
We are not always aware of what actually leads to optimal mind-states in ourselves and others.
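The likes/wants split above can be sketched in miniature (the options and satisfaction numbers are invented for illustration): "wants" follow what we predict will satisfy us, "likes" reflect what actually does, and the two can disagree.

```python
# Hypothetical illustration of "likes" vs "wants": we choose by predicted
# satisfaction, but the resulting mind-state depends on actual satisfaction.
# All options and numbers below are made up.

options = {
    # option: (predicted_satisfaction, actual_satisfaction)
    "scroll social media": (7, 3),
    "go for a walk": (5, 8),
}

def wanted(options: dict) -> str:
    """What we tend to choose: the option with highest predicted satisfaction."""
    return max(options, key=lambda o: options[o][0])

def liked(options: dict) -> str:
    """What actually yields the best mind-state."""
    return max(options, key=lambda o: options[o][1])

print(wanted(options))  # scroll social media
print(liked(options))   # go for a walk
```

When `wanted` and `liked` disagree, that's the "we are not always aware of what actually leads to optimal mind-states" point in one line.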
Sounds good to me! Given the way you've defined things.
Edit: So what do you conclude about morality from this?
Good question. I conclude that morality (which, as far as I can tell, seems like the same thing as goodness and altruism) does exist, and that our desire to be moral is the result of evolution (thanks for your scientific backup) just as much as our selfish desires are. Whatever category you place happiness in, goodness falls into the same one. I think that some people are mystified when they make decisions that inefficiently optimize their happiness (like all those examples we talked about), but they shouldn't be. Goodness is a terminal value too.
Also, morality is relative. How moral you are can be measured by some kind of altruism ratio that compares your terminal values of happiness and goodness. Someone can be "more moral" than others in the sense that he would be motivated more by goodness/altruism than he is by his own personal satisfaction, relative to them.
Is there any value in this idea? No practical value, except whatever personal satisfaction value an individual assigns to clarity. I wouldn't even call the idea a conclusion as much as a way to describe the things I understand in a slightly more clear way. I still don't particularly like ends-in-themselves.
Reduction time:
Why should I pursue clarity or donate to effective charities that are sub-optimal happiness-maximizers?
Because those are instrumental values.
Why should I pursue these instrumental values?
Because they lead to happiness and goodness.
Why should I pursue happiness and goodness?
Because they're terminal values.
Why should I pursue these terminal values?
Wrong question. Terminal values, by definition, are ends-in-themselves. So here the real question is not why should I, but rather, why do I pursue them? It's because the alien-god of evolution gave us emotions that make us want to be happy and good...
Why did the alien-god give us emotions?
The alien-god does not act rationally. There is no "why." The origin of emotion is the result of random chance. We can explain only its propagation.
Why should we be controlled by emotions that originated through random chance?
Wrong question. It's not a matter of whether they should control us. It's a fact that they do.
I pretty much agree. But I have one quibble that I think is worth mentioning. Someone else could just say, "No, that's not what morality is. True morality is...".
Actually, let me give you a chance to respond to that before elaborating. How would you respond to someone who says this?
Very very well put. Much respect and applause.
One very small comment though:
I see where you're coming from with this. If someone else heard this out of context they'd think, "No... emotion originates from evolutionary pressure". But then you'd say, "Yeah, but where do the evolutionary pressures come from". The other person would say, "Uh, ultimately the big bang I guess." And you seem to be saying, "exactly, and that's the result of random chance".
Some math-y/physicist-y person might argue with you here about the big bang being random. I think you could provide a very valid Bayesian counterargument saying that probability is in the mind, and that no one has a clue how the big bang/origin came to be, and so to anyone and everyone in this world, it is random.
Thanks :)
Yeah, I have no clue what evolutionary pressure means, or what the big-bang is, or any of that science stuff yet. sigh I really don't enjoy reading hard science all that much, but I enjoy ignorance even less, so I'll probably try to educate myself more about that stuff soon after I finish the rationality book.
Example case:
True morality is following God's will? Basically everyone who says this believes "God wants what's best for us, even when we don't understand it." Their understanding of God's will and their intuitive idea of what's best for people rarely conflict, though. But here's an extreme example of when it could: Let's say someone strongly believes in God (not just believes in belief), and for some reason thinks that God wants him to sacrifice his child. This action would go against his (unrecognized) terminal value of goodness, but he could still do it, subconsciously satisfying his (unrecognized) terminal value of personal happiness. He takes comfort in his belief in God and heaven. He takes comfort in his community. To not sacrifice the child would be to deny God and lose that comfort. These thoughts obviously don't happen on a conscious level, but they could be intuitions?
Idk, feel free to throw more "true morality is..." scenarios at me...