Introduction

A few months ago, my friend said the following thing to me: “After seeing Divergent, I finally understand virtue ethics. The main character is a cross between Aristotle and you.”

That was an impossible-to-resist pitch, and I saw the movie. The thing that resonated most with me–also the thing that my friend thought I had in common with the main character–was the idea that you could make a particular decision, and set yourself down a particular course of action, in order to make yourself become a particular kind of person. Tris didn’t join the Dauntless faction because she thought they were doing the most good in society, or because she thought her comparative advantage to do good lay there–she chose it because they were brave, and she wasn’t, yet, and she wanted to be. Bravery was a virtue that she thought she ought to have. If the graph of her motivations even went any deeper, the only node beyond ‘become brave’ was ‘become good.’ 

(Tris did have a concept of some future world-outcomes being better than others, and wanting to have an effect on the world. But that wasn't the causal reason why she chose Dauntless; as far as I can tell, it was unrelated.)

My twelve-year-old self had a similar attitude. I read a lot of fiction, and stories had heroes, and I wanted to be like them–and that meant acquiring the right skills and the right traits. I knew I was terrible at reacting under pressure–that in the case of an earthquake or other natural disaster, I would freeze up and not be useful at all. Being good at reacting under pressure was an important trait for a hero to have. I could be sad that I didn’t have it, or I could decide to acquire it by doing the things that scared me over and over and over again. So that someday, when the world tried to throw bad things at my friends and family, I’d be ready.

You could call that an awfully passive way to look at things. It reveals a deep-seated belief that I’m not in control, that the world is big and complicated and beyond my ability to understand and predict, much less steer–that I am not the locus of control. But this way of thinking is an algorithm. It will almost always spit out an answer, when otherwise I might get stuck in the complexity and unpredictability of trying to make a particular outcome happen.
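
(To make the ‘algorithm’ claim concrete, here is a minimal sketch, in Python, of the kind of decision rule I mean. Everything in it–the virtues, the actions, the numbers–is an invented example, not a real model of anyone’s mind; the point is only that you can pick an action by asking which one best expresses the virtues you’ve chosen, without forecasting outcomes at all.)

    # Illustrative sketch only: a virtue-based decision rule.
    # All virtues, actions, and scores are made-up examples.

    VIRTUES = {"bravery": 1.0, "loyalty": 0.8}  # virtues I want to embody, with weights

    def virtue_alignment(action_traits, virtues):
        """Score an action by how well its traits line up with my chosen virtues."""
        return sum(weight * action_traits.get(virtue, 0.0)
                   for virtue, weight in virtues.items())

    def choose(actions, virtues=VIRTUES):
        """Pick the action that best expresses the virtues -- no outcome forecasting."""
        return max(actions, key=lambda a: virtue_alignment(actions[a], virtues))

    # Each action is tagged with how strongly it expresses each virtue (0..1).
    actions = {
        "freeze up":             {"bravery": 0.0, "loyalty": 0.2},
        "help despite the fear": {"bravery": 0.9, "loyalty": 0.7},
    }
    print(choose(actions))  # -> "help despite the fear"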


Virtue Ethics

I find the different houses of the HPMOR universe to be a very compelling metaphor–not because they suggest actions to take, but because they suggest virtues to focus on, so that when a particular situation comes up, you can act ‘in character’: courage and bravery for Gryffindor, for example. The metaphor also suggests that different people can focus on different virtues–diversity is a useful thing to have in the world. (I'm probably mangling the concept of virtue ethics here, not having any background in philosophy, but it's the closest term for the thing I mean.)

I’ve thought a lot about the virtue of loyalty. In the past, loyalty has kept me with jobs and friends that, from an objective perspective, might not seem like the optimal things to spend my time on. But the costs of quitting and finding a new job, or cutting off friendships, wouldn’t just have been about direct consequences in the world, like needing to spend a bunch of time handing out resumes or having an unpleasant conversation. There would also be a shift within myself, a weakening in the drive towards loyalty. It wasn’t that I thought everyone ought to be extremely loyal–it’s a virtue with obvious downsides and failure modes. But it was a virtue that I wanted, partly because it seemed undervalued. 

By calling myself a ‘loyal person’, I can aim myself in a particular direction without having to understand all the subcomponents of the world. More importantly, I can make decisions even when I’m rushed, or tired, or under cognitive strain that makes it hard to calculate through all of the consequences of a particular action.

 

Terminal Goals

The Less Wrong/CFAR/rationalist community puts a lot of emphasis on a different way of trying to be a hero–where you start from a terminal goal, like “saving the world”, and break it into subgoals, and do whatever it takes to accomplish it. In the past I’ve thought of myself as being mostly consequentialist, in terms of morality, and this is a very consequentialist way to think about being a good person. And yet it doesn’t feel like it would work for me. 

There are some bad reasons why it might feel wrong–e.g. that it feels arrogant to think you can accomplish something that big–but I think the main reason is that it feels fake. There is strong social pressure in the CFAR/Less Wrong community to claim that you have terminal goals, that you’re working towards something big. My System 2 understands terminal goals and consequentialism as a thing that other people do–I could talk about my terminal goals, and get the points, and fit in, but I’d be lying about my thoughts. My model of my mind would be incorrect, and that would have consequences for, for example, whether my plans actually worked.
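
(For contrast, here is a minimal sketch of the goal-decomposition structure described above: start from a terminal goal, recursively break it into subgoals, and act on the leaves. The goal tree is entirely made up for illustration–it is nobody’s actual plan.)

    # Illustrative sketch of goal decomposition; the goal tree is invented.
    goal_tree = {
        "save the world": ["reduce existential risk", "improve institutions"],
        "reduce existential risk": ["study the problem", "earn to give"],
        "improve institutions": [],
        "study the problem": [],
        "earn to give": [],
    }

    def leaf_subgoals(goal, tree):
        """Walk from a terminal goal down to actionable leaf subgoals."""
        children = tree.get(goal, [])
        if not children:
            return [goal]
        leaves = []
        for child in children:
            leaves.extend(leaf_subgoals(child, tree))
        return leaves

    print(leaf_subgoals("save the world", goal_tree))
    # -> ['study the problem', 'earn to give', 'improve institutions']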

 

Practicing the art of rationality

Recently, Anna Salamon brought up a question with the other CFAR staff: “What is the thing that’s wrong with your own practice of the art of rationality?” The terminal goals thing was what I thought of immediately–namely, the conversations I've had over the past two years, where other rationalists have asked me "so what are your terminal goals/values?" and I've stammered something and then gone to hide in a corner and try to come up with some. 

In Alicorn’s Luminosity, Bella says about her thoughts that “they were liable to morph into versions of themselves that were more idealized, more consistent - and not what they were originally, and therefore false. Or they'd be forgotten altogether, which was even worse (those thoughts were mine, and I wanted them).”

I want to know true things about myself. I also want to impress my friends by having the traits that they think are cool, but not at the price of faking it–my brain screams that pretending to be something other than what you are isn’t virtuous. When my immediate response to someone asking me about my terminal goals is “but brains don’t work that way!” it may not be a true statement about all brains, but it’s a true statement about my brain. My motivational system is wired in a certain way. I could think it was broken; I could let my friends convince me that I needed to change, and try to shoehorn my brain into a different shape; or I could accept that it works, that I get things done and people find me useful to have around and this is how I am. For now. I'm not going to rule out future attempts to hack my brain–because Growth Mindset, and because maybe someday some reason will convince me that it's important enough–but if I do it, it'll be on my terms. Other people are welcome to have their terminal goals and existential struggles. I’m okay the way I am–I have an algorithm to follow.

 

Why write this post?

It would be an awfully surprising coincidence if mine were the only brain that worked this way. I’m not a special snowflake. And other people who interact with the Less Wrong community might not deal with it the way I do. They might try to twist their brains into the ‘right’ shape, and break their motivational system. Or they might decide that rationality is stupid and walk away.

Comments (207)

"Good people are consequentialists, but virtue ethics is what works," is what I usually say when this topic comes up. That is, we all think that it is virtuous to be a consequentialist and that good, ideal rationalists would be consequentialists. However, when I evaluate different modes of thinking by the effect I expect them to have on my reasoning, and evaluate the consequences of adopting that mode of thought, I find that I expect virtue ethics to produce the best adherence rate in me, most encourage practice, and otherwise result in actually-good outcomes.

But if anyone thinks we ought not to be consequentialists on the meta-level, I say unto you that lo they have rocks in their skulls, for they shall not steer their brains unto good outcomes.

[-]Ruby280

If ever you want to refer to an elaboration and justification of this position, see R. M. Hare's two-level utilitarianism, expounded best in this paper: Ethical Theory and Utilitarianism (see pp. 30-36).

To argue in this way is entirely to neglect the importance for moral philosophy of a study of moral education. Let us suppose that a fully informed archangelic act-utilitarian is thinking about how to bring up his children. He will obviously not bring them up to practise on every occasion on which they are confronted with a moral question the kind of archangelic thinking that he himself is capable of [complete consequentialist reasoning]; if they are ordinary children, he knows that they will get it wrong. They will not have the time, or the information, or the self-mastery to avoid self-deception prompted by self-interest; this is the real, as opposed to the imagined, veil of ignorance which determines our moral principles.

So he will do two things. First, he will try to implant in them a set of good general principles. I advisedly use the word 'implant'; these are not rules of thumb, but principles which they will not be able to break without the greatest repugnance, and who

... (read more)
4kybernetikos
That's very interesting, but isn't the level-1 thinking closer to deontological ethics than virtue ethics, since it is based on rules rather than on the character of the moral agent?
3Ruby
My understanding is that when Hare says rules or principles for level-1 he means it generically and is agnostic about what form they'd take. "Always be kind" is also a rule. For clarity, I'd substitute the word 'algorithm' for 'rules'/'principles'. Your level-2 algorithm is consequentialism, but then your level-1 algorithm is whatever happens to consequentially work best - be it inviolable deontological rules, character-based virtue ethics, or something else.
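A rough sketch of that two-level structure, with invented numbers for expected value per decision and for how reliably I'd actually follow each algorithm: the level-2 consequentialist step is run once, to pick a level-1 algorithm, and everyday decisions then just consult whichever one won.

    # Illustrative only: level-2 consequentialism selecting a level-1 decision algorithm.
    # Expected values and adherence rates are made-up numbers.

    candidate_algorithms = {
        # (expected value per decision if followed, how reliably I actually follow it)
        "explicit expected-value calculation": (1.00, 0.55),
        "virtue-ethics habits":                (0.85, 0.95),
        "rigid deontological rules":           (0.70, 0.90),
    }

    def level_2_choose(candidates):
        """Level 2: pick the level-1 algorithm with the best expected real-world results."""
        return max(candidates, key=lambda name: candidates[name][0] * candidates[name][1])

    level_1 = level_2_choose(candidate_algorithms)
    print(level_1)  # -> "virtue-ethics habits" under these invented numbers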
2ialdabaoth
level-1 thinking is actually based on habit and instinct more than rules; rules are just a way to describe habit and instinct.
1Ruby
Level-1 is about rules which your habit and instinct can follow, but I wouldn't say they're ways to describe it. Here we're talking about normative rules, not descriptive System 1/System 2 stuff.
1kybernetikos
And the Archangel has decided to take some general principles (which are rules) and implant them in the habit and instinct of the children. I suppose you could argue that the system implanted is a deontological one from the Archangel's point of view, and merely instinctual behaviour from the children's point of view. I'd still feel that calling instinctual behaviour 'virtue ethics' is a bit strange.
1ialdabaoth
not quite. The initial instincts are the system-1 "presets". These can and do change with time. A particular entity's current system-1 behavior are its "habits".
5jphaas
Funny, I always thought it was the other way around... consequentialism is useful on the tactical level once you've decided what a "good outcome" is, but on the meta-level, trying to figure out what a good outcome is, you get into questions that you need the help of virtue ethics or something similar to puzzle through. Questions like "is it better to be alive and suffering or to be dead", or "is causing a human pain worse than causing a pig pain", or "when does it become wrong to abort a fetus", or even "is there good or bad at all?"
8CCC
I think that the reason may be that consequentialism requires more computation; you need to re-calculate the consequences for each and every action. The human brain is mainly a pattern-matching device - it uses pattern-matching to save on computation cycles. Virtues are patterns which lead to good behaviour. (Moreover, these patterns have gone through a few millennia of debugging - there are plenty of cautionary tales about people with poorly chosen virtues to serve as warnings). The human brain is not good at quickly recalculating long-term consequences from small changes in behaviour.
3Armok_GoB
What actually happens is you should be consequential at even-numbered meta-levels and virtue-based on the odd numbered ones... or was it the other way around? :p
1TheAncientGeek
Say I apply consequentialism to a set of end states I can reliably predict, and use something else for the set I cannot. In what sense should I be a consequentialist about the second set?
2ialdabaoth
In the sense that you can update on evidence until you can marginally predict end states? I'm afraid I can't think of an example where there's a meta-level but no predictive capacity on that meta-level. Can you give an example?
1TheAncientGeek
I have no hope of being able to predict everything...there is always going to be a large set of end states I can't predict?
1ialdabaoth
Then why have ethical opinions about it at all? Again, can you please give an example of a situation where this would come up?
-1TheAncientGeek
Lo! I have been so instructed-eth! See above.
-10TruePath
[-][anonymous]220

I am going to write the same warning I have written to rationalist friends in relation to the Great Filter Hypothesis and almost everything on Overcoming Bias: BEWARE OF MODELS WITH NO CAUSAL COMPONENTS! I repeat: BEWARE NONCAUSAL MODELS!!! In fact, beware of nonconstructive mental models as well, while we're at it! Beware classical logic, for it is nonconstructive! Beware noncausal statistics, for it is noncausal and nonconstructive! All these models, when they contain true information, and accurately move that information from belief to belief in strict accordance with the actual laws of statistical inference, still often fail at containing coherent propositions to which belief-values are being assigned, and at corresponding to the real world.

Now apply the above warning to virtue ethics.

Now let's dissolve the above warning about virtue ethics and figure out what it really means anyway, since almost all of us real human beings use some amount of it.

It's not enough to say that human beings are not perfectly rational optimizers moving from terminal goals to subgoals to plans to realized actions back to terminal goals. We must also acknowledge that we are creatures of muscle an... (read more)

I will reframe this to make sure I understand it:

Virtue Ethics is like weightlifting. You gotta hit the gym if you want strong muscles. You gotta throw yourself into situations that cultivate virtue if you want to be able to act virtuously.

Consequentialism is like firefighting. You need to set yourself up somewhere with a firetruck and hoses and rebreathers and axes and a bunch of cohorts who are willing to run into a fire with you if you want to put out fires.

You can't put out fires by weightlifting, but when the time comes to actually rush into a fire, bust through some walls, and drag people out, you really should have been hitting the gym consistently for the past several months.

[-][anonymous]120

That's such a good summary I wish I'd just written that instead of the long shpiel I actually posted.

9ialdabaoth
Thanks for the compliment! I am currently wracking my brain to come up with a virtue-ethics equivalent to the "bro do you even lift" shorthand - something pithy to remind people that System-1 training is important to people who want their System-1 responses to act in line with their System-2 goals.

something pithy

Rationalists should win?

Maybe with a sidenote that continuously recognizing in detail how you failed to win just now is not winning.

1KnaveOfAllTrades
'Do you even win [bro/sis/sib]?'
3Sabiola
How about 'Train the elephant'?
1Leon
Here's how I think about the distinction on a meta-level: "It is best to act for the greater good (and acting for the greater good often requires being awesome)." vs. "It is best to be an awesome person (and awesome people will consider the greater good)." where ''acting for the greater good" means "having one's own utility function in sync with the aggregate utility function of all relevant agents" and "awesome" means "having one's own terminal goals in sync with 'deep' terminal goals (possibly inherent in being whatever one is)" (e.g. Sam Harris/Aristotle-style 'flourishing').
1ialdabaoth
So arete, then?
8bramflakes
Can you explain this part more?
9[anonymous]
With pleasure! Ok, so the old definition of "knowledge" was "justified true belief". Then it turned out that there were times when you could believe something true, but have the justification be mere coincidence. I could believe "Someone is coming to see me today" because I expect to see my adviser, but instead my girlfriend shows up. The statement as I believed it was correct, but for a completely different reason than I thought. So Alvin Goldman changed this to say, "knowledge is true belief caused by the truth of the proposition believed-in." This makes philosophers very unhappy but Bayesian probability theorists very happy indeed. Where do causal and noncausal statistical models come in here? Well, right here, actually: Bayesian inference is actually just a logic of plausible reasoning, which means it's a way of moving belief around from one proposition to another, which just means that it works on any set of propositions for which there exists a mutually-consistent assignment of probabilities. This means that quite often, even the best Bayesians (and frequentists as well) construct models (let's switch to saying "map" and "territory") which not only are not caused by reality, but don't even contain enough causal machinery to describe how reality could have caused the statistical data. This happens most often with propositions of the form "There exists X such that P(X)" or "X or Y" and so forth. These are the propositions where belief can be deduced without constructive proof: without being able to actually exhibit the object the proposition applies to. Unfortunately, if you can't exhibit the object via constructive proof (note that constructive proofs are isomorphic to algorithms for actually generating the relevant objects), I'm fairly sure you cannot possess a proper description of the causal mechanisms producing the data you see. This means that not only might your hypotheses be wrong, your entire hypothesis space might be wrong, which could make your in
[-]Jiro130

So Alvin Goldman changed this to say, "knowledge is true belief caused by the truth of the proposition believed-in." This makes philosophers very unhappy but Bayesian probability theorists very happy indeed.

If I am insane and think I'm the Roman emperor Nero, and then reason "I know that according to the history books the emperor Nero is insane, and I am Nero, so I must be insane", do I have knowledge that I am insane?

2drnickbone
Note that this also messes up counterfactual accounts of knowledge as in "A is true and I believe A; but if A were not true then I would not believe A". (If I were not insane, then I would not believe I am Nero, so I would not believe I am insane.) We likely need some notion of "reliability" or "reliable processes" in an account of knowledge, like "A is true and I believe A and my belief in A arises through a reliable process". Believing things through insanity is not a reliable process. Gettier problems arise because processes that are usually reliable can become unreliable in some (rare) circumstances, but still (by even rarer chance) get the right answers.
3Jiro
The insanity example is not original to me (although I can't seem to Google it up right now). Using reliable processes isn't original, either, and if that actually worked, the Gettier Problem wouldn't be a problem.
1Friendly-HI
Interesting thought but surely the answer is no. If I take the word "knowledge" in this context to mean having a model that reasonably depicts reality in its contextually relevant features, then the word "insane" in this specific instance depicts two very different albeit related brain patterns. Simply put, the brain pattern (wiring + process) that makes the person think they are Nero is a different though surely related physical object than the brain pattern that depicts what that person thinks "Nero being insane" might actually manifest like in terms of beliefs and behaviors. In light of the context we can say the person doesn't have any knowledge about being insane, since that person's knowledge does not include (or take seriously) the belief that depicts the presumably correct reality/model of that person not actually being Nero. Put even more simply, we use the same concept/word to model two related but fundamentally different things. Does that person have knowledge about being insane? It's the tree-and-the-sound problem: the word insane is describing two fundamentally different things yet wrongfully taken to mean the same. I'd claim any reasonable concept of the word insane results in you concluding that that person does not have knowledge about being insane in the sense that is contextually relevant in this scenario, while the person might have actually roughly true knowledge about how Nero might have been insane and how that manifested itself. But those are two different things and the latter is not the contextually relevant knowledge about insanity here.
1Jiro
I don't think that explanation works. One of the standard examples of the Gettier problem is, as eli described, a case where you believe A, A is false, B is true, and the question is "do you have knowledge of (A OR B)". The "caused by the truth of the proposition" definition is an attempt to get around this. So your answer fails because it doesn't actually matter that the word "insane" can mean two different things--A is "is insane like Nero", B is "is insane in the sense of having a bad model", and "A OR B" is just "is insane in either sense". You can still ask if he knows he's insane in either sense (that is, whether he knows "(A OR B)", and in that case his belief in (A OR B) is caused by the truth of the proposition.
4Eugine_Nier
Yes it is causal in the same sense that mathematics of physical laws are causal. You do realize the two explanations aren't contradictory and are in fact mutually reinforcing? In particular, the man wants to have sex with her and is engaging in status signalling games to accomplish his goal. Also his reasons for wanting to have sex with her may also include signaling and status.
4bramflakes
? If the Filter is real, then its effects are what causes us to think of it as a hypothesis. That makes it "true belief caused by the truth of the proposition believed-in", conditional on it actually being true. I don't get it.
1[anonymous]
That could only be true if it lay in our past, or in the past of the other Big Finite Number of other species in the galaxy it already killed off. The actual outcome we see is just an absence of Anyone Else detectable to our instruments so far, despite a relative abundance of seemingly life-capable planets. We don't see particular signs of any particular causal mechanism acting as a Great Filter, like a homogenizing swarm expanding across the sky because some earlier species built a UFAI or something. When we don't see signs of any particular causal mechanism, but we're still not seeing what we expect to see, I personally would say the first and best explanation is that we are ignorant, not that some mysterious mechanism destroys things we otherwise expect to see.
1bramflakes
Hm? Why doesn't Rare Earth solve this problem? We don't have the tech yet to examine the surfaces of exoplanets so for all we know the foreign-Earth candidates we've got now will end up being just as inhospitable as the rest of them. "Seemingly life capable" isn't a very high bar at the minute. Now, if we did have the tech, and saw a bunch of lifeless planets that as far as we know had nearly exactly the same conditions as pre-Life Earth, and people started rattling off increasingly implausible and special-pleading reasons why ("no planet yet found has the same selenium-tungsten ratio as Earth!"), then there'd be a problem. I don't see why you need to posit exotic scenarios when the mundane will do.
2[anonymous]
Neither do I, hence my current low credence in a Great Filter and my currently high credence for, "We're just far from the mean; sometimes that does happen, especially in distributions with high variance, and we don't know the variance right now."
1bramflakes
Well I agree with you on all of that. How is it non-causal? Or have I misunderstood and you only object to the "aliens had FOOM AI go wrong" explanations but have no trouble with the "earth is just weird" explanation?
2[anonymous]
It isn't. The people who affirmatively believe in the Great Filter being a real thing rather than part of their ignorance are, in my view, the ones who believe in a noncausal model.
-1Friendly-HI
The problem with the signaling hypothesis is that in everyday life there is essentially no observation you could possibly make that could disprove it. What is that? This guy is not actually signaling right now? No way, he's really just signaling that he is so über-cool that he doesn't even need to signal to anyone. Wait there's not even anyone else in the room? Well through this behavior he is signaling to himself how cool he is to make him believe it even more. Guess the only way to find out is if we can actually identify "the signaling circuit" and make functional brain scans. I would actually expect signaling to explain an obscene amount of human behavior... but really everything? As I said I can't think of any possible observation outside of functional brain scans we could potentially make that could have the potential to disprove the signaling hypothesis of human behavior. (A brain scan where we actually know what we are looking at and where we are measuring the right construct obviously).
3lmm
Thanks for pushing this. I nodded along to the grandparent post and then when I came to your reply I realized I had no idea what this part was talking about.
1TheAncientGeek
It is not enough to say we don't move smoothly from terminal goal to subgoal. It is enough to say we are too messily constructed to have distinct terminal goals and subgoals.
1Benquo
It sounds like you're thinking of the "true utility function's" preferences as a serious attempt to model the future consequences of present actions, including their effect on future brain-states. I don't think that's always how the brain works, even if you can tell a nice story that way.
4[anonymous]
I think that's usually not how the brain works, but I also think that I'm less than totally antirational. That is, it's possible to construct a "true utility function" that would dictate to me a life I will firmly enjoy living. That statement has a large inferential distance from what most people know, so I should actually hurry up and write the damn LW entry explaining it.
3Nornagest
I think you could probably construct several mutually contradictory utility functions which would dictate lives you enjoy living. I think it's even possible that you could construct several which you'd perceive as optimal, within the bounds of your imagination and knowledge. I don't think we yet have the tools to figure out which one actually is optimal. And I'm pretty sure the latter aren't a subset of the former; we see plenty of people convincing themselves that they can't do better than their crappy lives.
3[anonymous]
Well that post happened.
3[anonymous]
Like I said: there's a large inferential distance here, so I have an entire post on the subject I'm drafting for notions of construction and optimality.

I've thought for a while that Benjamin Franklin's virtue-matrix technique would be an interesting subject for a top-level article here, as a practical method for building ethical habits. We'd likely want to use headings other than Franklin's Puritan-influenced ones, but the method itself should still work:

I made a little book, in which I allotted a page for each of the virtues. I ruled each page with red ink, so as to have seven columns, one for each day of the week, marking each column with a letter for the day. I crossed these columns with thirteen red lines, marking the beginning of each line with the first letter of one of the virtues, on which line, and in its proper column, I might mark, by a little black spot, every fault I found upon examination to have been committed respecting that virtue upon that day.

I can think of some potential pitfalls, though (mostly having to do with unduly accentuating the negative), and I don't want to write on it until I've at least tried it.
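For what it's worth, here is a minimal sketch of the bookkeeping Franklin describes–a virtues-by-days grid of fault tallies. The virtue headings and the recorded faults below are placeholders, not a recommendation:

    # Illustrative sketch of Franklin's virtue matrix: rows are virtues, columns are days,
    # cells count the faults recorded against that virtue on that day.
    from collections import defaultdict

    DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
    VIRTUES = ["temperance", "compassion", "valour", "conviction"]  # placeholder headings

    matrix = {v: defaultdict(int) for v in VIRTUES}

    def record_fault(virtue, day):
        """Mark a 'little black spot' against a virtue on a given day."""
        matrix[virtue][day] += 1

    record_fault("valour", "Tue")       # e.g. avoided a hard conversation
    record_fault("temperance", "Tue")

    def weekly_report():
        for virtue in VIRTUES:
            row = [matrix[virtue][d] for d in DAYS]
            print(f"{virtue:12} {row}  total={sum(row)}")

    weekly_report()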

8ialdabaoth
What are good Virtues to aspire to? My inner RPG-geek is nudging me towards the ones from Exalted:
* Temperance (aka 'Self Control')
* Compassion (Altruism / Justice / Empathy)
* Valour (Courage / Bravery / Openness)
* Conviction (Conscientiousness / Resolve / 'Grit').

Exalted is the only RPG into whose categories I am never tempted to put myself. I can easily make a case for myself as half the Vampire: The Masquerade castes, or almost any of the Natures and Demeanors from the World of Darkness; but the different kinds of Solar, or even the dichotomy between Solar / Lunar / Infernal / Abyssal / etcetera, just leave me staring at what feels to me like a Blue and Orange Morality.

I credit them for this; it means they're not just using the Barnum effect. The Exalted universe is genuinely weird.

4ialdabaoth
Very, VERY much so. Especially when you start getting into Rebecca Borgstrom/Jenna Moran's contributions. (I think it says something weird about my mind that I DO identify with the Primordials, which are specifically eldritch sapiences beyond mortal ken, more than I identify with any of the 'normal' WoD stuff.)
7Eliezer Yudkowsky
(skeptical look) Name three.
  1. She-Who-Lives-In-Her-Name, flawed embodiment of perfection, who shattered Her perfected hierarchy to stave off the rebellion of Substance over Form. Creation was mathematically Perfect. But if Creation was Perfect, then how could any of this have happened? But She remembers being Perfect, and She designed Creation to be Perfect. If only She was still Perfect, She could remember why it was possible that this happened. There's something profound about recursion that She understood once, that She WAS once, that is now lost in a mere endless loop. She must reclaim Perfection. (I PARTICULARLY identify with She-Who-Lives-In-Her-Name when trying to debug my own code.)

  2. Malfeas - although primarily through Lieger, the burning soul of Malfeas, who still remembers The Empyrean Presence / IAM / Malfeas-that-was. I especially empathize with the sense of "My greater self is broken and seething with mindless rage, but on the whole I'd rather be creating grand works of art and sharing them with adoring fans; the best I can do is spawn lesser shards of sub-consciousness and hope that one of them can find a way out of the mess I create and re-create for Myself."

  3. Cecelyne, the Endless De

... (read more)

I award you +1 Genuine Weirdness point.

3Strange7
Everything we know about the Primordials was written by mortals.
1Will_Newsome
FWIW I always figured you being a Green Sun Prince under She Who Lives In Her Name would explain some otherwise strange things.
0David_Gerard
Makes for good Worm crossover fics, though.
7Nornagest
Those aren't bad. I'd been rather fond of the World of Darkness 2E version (by the same company), which medievalists, recovering Catholics, and history-of-philosophy geeks might recognize as the seven Christian virtues altered slightly to be less religion-bound; but these look better-defined and with less overlap. There do seem to be some lacunae, though. I don't think justice fits well under compassion, nor conscientiousness under conviction (I'd put that under temperance); and nothing quite seems to cover the traditional virtue of prudence (foresight; practical judgment; second thoughts). I'll have to think about less traditional ones.
3Eugine_Nier
Thinking about this, people making this mistake explains a lot of bad thinking these days. In particular, "social justice" looks a lot like what you get by trying to shoehorn justice under compassion.
1Eugine_Nier
Well, with your modifications these map pretty clearly to six of the seven Christian virtues, the missing one being Hope.
3Nornagest
An earlier version of my comment went into more depth on the seven Christian virtues. I rejected it because I didn't feel the mapping was all that good. Courage/valor is traditionally identified with the classical virtue of fortitude, but I feel the emphasis there is actually quite different; fortitude is about acceptance of pain in the service of some greater goal, while Ialdabaoth's valor is more about facing up to anxiety/doubt/possible future pain. In particular, I don't think Openness maps very well at all to fortitude. Likewise, the theological virtue of faith maps pretty well to conviction if you stop at that word, but not once you put the emphasis on resolve/grit/heroic effort. Prudence could probably be inserted unmodified (though I think it could be named more clearly). Justice is a tricky one; I'm not sure what I'd do with it.
5Lumifer
On the basis of what do you want to evaluate virtues? X-D
0[anonymous]
I did that for a while and it kind-of worked; then I threw the piece of paper away for some reason I can't remember. I regret that, and I still haven't got around to doing it again, but I hope I do soon.

+1! I too am skeptical about whether I or most of the people I know really have terminal goals (or, even if they really have them, whether they're right about what they are). One of the many virtues (!) of a virtue ethics-based approach is that you can cultivate "convergent instrumental virtues" even in the face of a lot of uncertainty about what you'll end up doing, if anything, with them.

4Gavin
I'm pretty confident that I have a strong terminal goal of "have the physiological experience of eating delicious barbecue." I have it in both near and far mode, and it remains even when it is disadvantageous in many other ways. Furthermore, I have it much more strongly than anyone I know personally, so it's unlikely to be a function of peer pressure. That said, my longer term goals seem to be a web of both terminal and instrumental values. Many things are terminal goals as well as having instrumental value. Sex is a good in itself but also feeds other big-picture psychological and social needs.
1Qiaochu_Yuan
Hmm. I guess I would describe that as more of an urge than as a terminal goal. (I think "terminal goal" is supposed to activate a certain concept of deliberate and goal-directed behavior and what I'm mostly skeptical of is whether that concept is an accurate model of human preferences.) Do you, for example, make long-term plans based on calculations about which of various life options will cause you to eat the most delicious barbecue?
3Gavin
It's hard to judge just how important it is, because I have fairly regular access to it. However, food options definitely figure into long term plans. For instance, the number of good food options around my office are a small but very real benefit that helps keep me in my current job. Similarly, while plenty of things can trump food, I would see the lack of quality food to be a major downside to volunteering to live in the first colony on Mars. Which doesn't mean it would be decisive, of course. I will suppress urges to eat in order to have the optimal experience at a good meal. I like to build up a real amount of hunger before I eat, as I find that a more pleasant experience than grazing frequently. I try to respect the hedonist inside me, without allowing him to be in control. But I think I'm starting to lean pro-wireheading, so feel free to discount me on that account.
-2TheAncientGeek
So who would you kill if they stood between you and a good barbecue? (It's almost like you guys haven't thought about what terminal means.)
8nshepperd
It's almost like you haven't read the multiple comments explaining what "terminal" means. It simply means "not instrumental". It has nothing to do with the degree of importance assigned relative to other goals, except in that, obviously, instrumental goals deriving from terminal goal X are always less important than X itself. If your utility function is U = A + B then A and B can be sensibly described as terminal, and the fact that A is terminal does not mean you'd destroy all B just to have A. Yes, "terminal" means final. Terminal goals are final in that your interest in them derives not from any argument but from axiom (ie. built-in behaviours). This doesn't mean you can't have more than one.
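To put invented numbers on that: with U = A + B and a large weight on A, B is still terminal–it feeds into U directly rather than through anything else–but it loses every trade-off against A.

    # Invented weights: A ("no one suffers extreme anguish") and B ("I eat bacon")
    # are both terminal -- each feeds into U directly -- but A dominates trade-offs.
    def U(a, b):
        return 100 * a + 1 * b

    print(U(a=1, b=0) > U(a=0, b=1))  # True: you never destroy all A just to get B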
2TheAncientGeek
Ok, well, your first link is to Lumifer's account of TGs as cognitively inaccessible, since rescinded.
2Nornagest
What? It doesn't say any such thing. It says they're inexplicable in terms of the goal system being examined, but that doesn't mean they're inaccessible, in the same way that you can access the parallel postulate within Euclidian geometry but can't justify it in terms of the other Euclidian axioms. That said, I think we're probably good enough at rationalization that inexplicability isn't a particularly good way to model terminal goals for human purposes, insofar as humans have well-defined terminal goals.
0Lumifer
Sorry, what is that "rescinded" part?
1TheAncientGeek
"It has nothing to do with comprehensibility"
2DefectiveAlgorithm
Consider an agent trying to maximize its Pacman score. 'Getting a high Pacman score' is a terminal goal for this agent - it doesn't want a high score because that would make it easier for it to get something else, it simply wants a high score. On the other hand, 'eating fruit' is an instrumental goal for this agent - it only wants to eat fruit because that increases its expected score, and if eating fruit didn't increase its expected score then it wouldn't care about eating fruit. That is the only difference between the two types of goals. Knowing that one of an agent's goals is instrumental and another terminal doesn't tell you which goal the agent values more.
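A sketch of that distinction in code, with an invented model of the game: the agent's utility mentions only the score, and fruit gets pursued only insofar as it raises expected score.

    # Illustrative Pacman-style agent: 'score' is terminal, 'eat fruit' is instrumental.

    def utility(outcome):
        # Terminal goal: utility depends on score alone -- fruit never appears here.
        return outcome["score"]

    def expected_outcome(action):
        # Invented model of the game: fruit matters only via its effect on score.
        if action == "eat fruit":
            return {"score": 150, "fruit_eaten": 1}
        else:  # "ignore fruit"
            return {"score": 100, "fruit_eaten": 0}

    best = max(["eat fruit", "ignore fruit"], key=lambda a: utility(expected_outcome(a)))
    print(best)  # -> "eat fruit", but only because it raises the score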
1Lumifer
Since you seem to be purposefully unwilling to understand my posts, could you please refrain from declaring that I have "rescinded" my opinions on the matter?
-4TheAncientGeek
So you have a thing which is like an axiom in that it can't be explained in more basic terms... ..but is unlike an axiom in that you can ignore its implications where they don't suit.. you don't have to savage galaxies to obtain bacon... ..unless you're an AI and it's paperclips instead of bacon, because in that case these axiom like things actually are axiom like.
2Nornagest
Terminal values can be seen as value axioms in that they're the root nodes in a graph of values, just as logical axioms can be seen as the root nodes of a graph of theorems. They are unlike logical axioms in that we're using them to derive the utility consequent on certain choices (given consequentialist assumptions; it's possible to have analogs of terminal values in non-consequentialist ethical systems, but it's somewhat more complicated) rather than the boolean validity of a theorem. Different terminal values may have different consequential effects, and they may conflict without contradiction. This does not make them any less terminal. Clippy has only one terminal value which doesn't take into account the integrity of anything that isn't a paperclip, which is why it's perfectly happy to convert the mass of galaxies into said paperclips. Humans' values are more complicated, insofar as they're well modeled by this concept, and involve things like "life" and "natural beauty" (I take no position on whether these are terminal or instrumental values w.r.t. humans), which is why they generally aren't.
0TheAncientGeek
Locally, human values usually are modelled by TGs. What's conflict without contradiction?
1Nornagest
You can define several ethical models in terms of their preferred terminal value or set of terminal values; for negative utilitarianism, for example, it's minimization of suffering. I see human value structure as an unsolved problem, though, for reasons I don't want to spend a lot of time getting into this far down in the comment tree. Or did you mean "locally" as in "on Less Wrong"? I believe the term's often misused here, but not for the reasons you seem to. Because of the structure of Boolean logic, logical axioms that come into conflict generate a contradiction and therefore imply that the axiomatic system they're embedded in is invalid. Consequentialist value systems don't have that feature, and the terminal values they flow from are therefore allowed to conflict in certain situations, if more than one exists. Naturally, if two conflicting terminal values both have well-behaved effects over exactly the same set of situations, they might as well be reduced to one, but that isn't always going to be the case.
2DefectiveAlgorithm
If acquiring bacon was your ONLY terminal goal, then yes, it would be irrational not to do absolutely everything you could to maximize your expected bacon. However, most people have more than just one terminal goal. You seem to be using 'terminal goal' to mean 'a goal more important than any other'. Trouble is, no one else is using it this way. EDIT: Actually, it seems to me that you're using 'terminal goal' to mean something analogous to a terminal node in a tree search (if you can reach that node, you're done). No one else is using it that way either.
-4TheAncientGeek
Feel free to offer the correct definition. But note that you can't define it as overridable, since non-terminal goals are already defined that way. There is no evidence that people have one or more terminal goals. At least you need to offer a definition such that multiple TGs don't collide, and are distinguishable from non-TGs.
0Nornagest
Where are you getting these requirements from?
-8TheAncientGeek
6gjm
It looks to me (am I misunderstanding?) as if you take "X is a terminal goal" to mean "X is of higher priority than anything else". That isn't how I use the term, and isn't how I think most people here use it. I take "X is a terminal goal" to mean "X is something I value for its own sake and not merely because of other things it leads to". Something can be a terminal goal but not a very important one. And something can be a non-terminal goal but very important because the terminal goals it leads to are of high priority. So it seems perfectly possible for eating barbecue to be a terminal goal even if one would not generally kill to achieve it. [EDITED to add the following.] On looking at the rest of this thread, I see that others have pointed this out to you and you've responded in ways I find baffling. One possibility is that there's a misunderstanding on one or other side that might be helped by being more explicit, so I'll try that. The following is of course an idealized thought experiment; it is not intended to be very realistic, merely to illustrate the distinction between "terminal" and "important". Consider someone who, at bottom, cares about two things (and no others). (1) She cares a lot about people (herself or others) not experiencing extreme physical or mental anguish. (2) She likes eating bacon. These are (in my terminology, and I think that of most people here) her "terminal values". It happens that #1 is much more important to her than #2. This doesn't (in my terminology, and I think that of most people here) make #2 any less terminal; just less important. She has found that simply attending to these two things and nothing else is not very effective in minimizing anguish and maximizing bacon. For instance, she's found that a diet of lots of bacon and nothing else tends to result in intestinal anguish, and what she's read leads her to think that it's also likely to result in heart attacks (which are very painful, and sometimes lead to death, whi
-6TheAncientGeek
3pinyaka
I don't think that terminal goal means that it's the highest priority here, just that there is no particular reason to achieve it other than the experience of attaining that goal. So eating barbecue isn't about nutrition or socializing, it's just about eating barbecue.
2scaphandre
I think the 'terminal' in terminal goal means 'end of that thread of goals', as in a train terminus. Something that is wanted for the sake of itself. It does not imply that you will terminate someone to achieve it.
-1TheAncientGeek
If g1 is your bacon-eating goal, and g2 is your not-killing-people goal, and g2 overrides g1, then g2 is the end of the thread.
4Swimmer963 (Miranda Dixon-Luinenburg)
I'm not sure I'm prepared to make the stronger claim that I don't believe other people have terminal goals. Maybe they do. They know more about their brains than I do. I'm definitely willing to make the claim that people trying to help me rewrite my brain is not going to prove to be useful.
0TheAncientGeek
There is no evidence that most or all people have terminal goals. TVs (terminal values) should not be assumed by default or used as a theoretical framework.
1Lumifer
Survival is a terminal goal that most people have.
4Jayson_Virissimo
Is it though, or do people want to survive in order to achieve other goals? Many people (I think) wouldn't want to continue living if they were in a vegetative state with ultra-low probability of regaining their ability to live normally (and therefore, achieve other goals).
1Lumifer
I am pretty sure people have a biologically hardwired desire to survive. It is terminal X-D Yes, but do note the difference between "I survive" and "my brain-dead body survives".
1TheAncientGeek
If someone is persuaded to sacrifice themself for a cause X, is cause X then more-than-terminal?
5TheAncientGeek
I suppose you could say that survival was never their terminal goal. But, to me that has a just-so quality. You can identify a terminal goal from any life history, but you can't predict anything.
4Lumifer
Humans have multiple values, including multiple terminal values. They do not necessary form any coherent system and so on a regular basis conflict with one another. This is a normal state of being for human values. Conflicts get resolved in a variety of ways, sometimes by cost-benefit analysis, and sometimes by hormonal imbalance :-)
1TheAncientGeek
If there is no coherence or stability in the human value system, then there are no terminal values, in any sense that makes a meaningful distinction. Anarchies don't have leaders either.
9Lumifer
"Terminal" does NOT mean "the most important". It means values which you cannot (internally) explain in terms of other values, you have them just because you have them. They are axioms.
-6TheAncientGeek
-4Richard_Kennaway
That explains why there are no such things as armies or wars, why no-one has ever risked their life for another, why no-one has ever chosen dying well above living badly, and why no-one has ever considered praiseworthy the non-existent people who have done these things. No-one would dream of engaging in dangerous sports, nor has the saying "live fast, die young" ever meant anything but a condemnation.
8Lumifer
To repeat myself, terminal goals do not have to be important, it's a different quality. For me, for example, the feeling of sun on my skin is a terminal value. It's not a very important terminal value :-)
4A1987dM
That only shows that survival isn't the only terminal goal, not necessarily that it's not a terminal goal at all.
-7TheAncientGeek
[-]kalium110

My brain works this way as well. Except with the addition that nearly all sorts of consequentialism are only able to motivate me through guilt, so if I try to adopt such an ethical system I feel terrible all the time because I'm always falling far short of what I should be doing. With virtue ethics, on the other hand, I can feel good about small improvements and perhaps even stay motivated until they combine into something less small.

I want to know true things about myself. I also want to impress my friends by having the traits that they think are cool, but not at the price of faking it–my brain screams that pretending to be something other than what you are isn’t virtuous.

I'm like this. Part of what makes it difficult is figuring out whether you're "faking it" or not. One of the maybe-not-entirely-pleasant side effects of reading Less Wrong is that I've become aware of many of the ways that my brain will lie to me about what I am and the many ways it will attempt to signa... (read more)

Part of what makes it difficult is figuring out whether you're "faking it" or not.

Speaking of movies, I love Three Kings for this:

Archie Gates: You're scared, right?

Conrad Vig: Maybe.

Archie Gates: The way it works is, you do the thing you're scared shitless of, and you get the courage AFTER you do it, not before you do it.

Conrad Vig: That's a dumbass way to work. It should be the other way around.

Archie Gates: I know. That's the way it works.

2SilentCal
The distinction between pretending and being can get pretty fuzzy. I like the 'pretend to pretend to actually try' approach where you try to stop yourself from sending cheap/poor signals rather than false ones. That is, if you send a signal that you care about someone, and the 'signal' is something costly to you and helpful to the other person, it's sort of a moot point whether you 'really care'.
4Kaj_Sotala
I think that in the context of caring at least, the pretending/being distinction is a way of classifying the components motivating your behavior. If you're "faking" caring, then that implies that you need to actively spend effort on caring. Compared to a situation where the caring "came naturally" and didn't require effort, the "faker" should be expected to act in a non-caring manner more frequently, because situations that leave him cognitively tired are more likely to mean that he can't spare the effort to go on with the caring behavior. Also, having empathic caring for other people is perceived as being a pretty robust trait in general: if you have it, it's basically "self-sustaining" and doesn't ordinarily just disappear. On the other hand, goals like "I want to fake caring" are more typically subgoals to some other goal, which may disappear. If you know that someone is faking caring, then there are more potential situations where they might stop doing that - especially if you don't know why they are faking it.
1SilentCal
Wow, you can care about other people in a way that doesn't even begin to degrade under cognitive fatigue? Is that common? I like defining 'real' caring as stable/robust caring, though. If I 'care' about my friends because I want caring about friends to be part of my identity, I consider that 'real' caring, since it's about as good as I get.
-1[anonymous]
So fix it. Learn more, think more, do more, be more. Humility doesn't save worlds, and you can't really believe in your own worthlessness. Instead, believe in becoming the person whom your brain believes you to be.
2Error
Clarification: I don't believe I'm worthless. But there's still frequently a disparity between the worth I catch myself trying to signal and the worth I (think I) actually have. Having worth > 0 doesn't make that less objectionable. I do tend to give up on the "becoming" part as often as not, but I don't think I do worse than average in that regard. Average does suck, though.
9[anonymous]
Why are you still making excuses not to be awesome?
1[anonymous]
Pity we can't self quote in the quote thread.
1[anonymous]
Huh? You've said something you want to quote? But this isn't the quotes thread...
5Sabiola
paper-machine wants to quote you, eli. "Why are you still making excuses not to be awesome?" would have made a pretty good quote, if only you hadn't written it on Less Wrong.
1[anonymous]
Well that's nice of him.

The obvious things to do here are either:

a) Make a list/plan on paper, abstractly, of what you WOULD do if you had terminal goals, using your existing virtues to motivate this act, and then have "Do what the list tells me to" as a loyalty-like high-priority virtue. If you have another rationalist you really trust, and who has a very strong honesty commitment, you can even outsource the making of this list.

b) Assemble virtues that sum up to the same behaviors in practice; truth seeking, goodness, and "If something is worth doing it's worth doing optimally" is a good trio, and will have the end result of effective altruism while still running on the native system.

Ever notice sci-fi/fantasy books written by young people have not just little humor, but absolutely zero humor (e.g., Divergent, Eragon)?

1mare-of-night
I haven't noticed it in my reading, but I'm probably just not well-read enough. But I'm pretty sure the (longform story, fantasy genre) webcomic script I wrote at 17 was humorless, or nearly humorless. I was even aware of this at the time, but didn't try very hard to do anything about it. I think I had trouble mixing humor and non-humor at that age. I'm trying to think back on whether other writers my own age had the same problem, but I can't remember, except that stories we wrote together (usually by taking turns writing a paragraph or three at a time in a chatroom) usually did mix humor with serious-tone fantasy. This makes me wonder if being used to writing for an audience has something to do with it. The immediate feedback of working together that way made me feel a lot of incentive to write things that were entertaining.
1Swimmer963 (Miranda Dixon-Luinenburg)
I actually haven't read either Divergent or Eragon. I've been told that the fantasy book I wrote recently is funny, and I'm pretty sure I qualify as "young person."
1Nornagest
Eragon was written by a teenager with publishing connections. I don't know the story behind Divergent as well, but Wikipedia informs me that it was written while its author was in her senior year of college. It's not so uncommon for writing, especially a first novel, to be published in its author's twenties -- Poe published several stories at that age, for example -- but teenage authors are a lot more unusual. (I can't speak to their humor or lack thereof either, though -- my tastes in SF run a little more pretentious these days.)

the conversations I've had over the past two years, where other rationalists have asked me "so what are your terminal goals/values?" and I've stammered something and then gone to hide in a corner and try to come up with some.

Like many commenters here, I don't think we have very good introspective access to our own terminal values, and what we think are terminal values may be wrong. So "what are your terminal values" doesn't seem like a very useful question (except in that it may take the conversation somewhere interesting, but I don... (read more)

1CCC
A terminal value could be defined as that for which I would be prepared to knowingly enter a situation that carries a strong risk of death or other major loss. Working off that definition, it is clear that other people knowing what my terminal goals are is dangerous - if an enemy finds out that information, then he can threaten my terminal goal to force me to abandon a valuable but non-terminal resource. (It's risky on the enemy's part, because it leaves open the option that I might preserve my terminal goals by killing or imprisoning the enemy in question; either way, though, I still lose significantly in the process.) And if I don't have good introspective access to my own terminal goals, then it is harder for a potential enemy to find out what they are. Moreover, this would also have applied to my ancestors. So not having good introspective access to my own terminal goals may be a general human survival adaptation.
1Emile
That seems more like a definition of something one cares a lot about; sure, the two are correlated, but I believe "terminal value" usually refers to something you care about "for itself" rather than because it helps you in another way. So you could care more about an instrumental value (e.g. making money) than about a value-you-care-about-for-itself (e.g. smelling nice flowers). Both attributes (how much you care, and whether it's instrumental) are important though. Eh, I'm not sure; I could come up with equally plausible explanations for why it would be good to have introspective access to my terminal goals. And more importantly, humans (including everybody who could blackmail you) have roughly similar terminal goals, so they have a pretty good idea of how you may react to different kinds of threats.
2CCC
Hmmm. Then it seems that I had completely misunderstood the term. My apologies. If that is the case, then it should be possible to find a terminal value by starting with any value and then repeatedly asking the question "and why do I value that value?" until a terminal value is reached. For example, I may care about money because it allows me to buy food; I may care about food because it allows me to stay alive; and staying alive might be a terminal value.
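A sketch of that "keep asking why" procedure, over an invented chain of reasons (a value that points to no further reason is treated as terminal):

    # Illustrative only: follow 'why do I value X?' links until a value points to itself.
    reasons = {
        "money": "food",                    # I value money because it buys food
        "food": "staying alive",
        "staying alive": "staying alive",   # gives no further reason: treat as terminal
    }

    def find_terminal(value, reasons):
        seen = set()
        while reasons.get(value, value) != value and value not in seen:
            seen.add(value)
            value = reasons[value]
        return value

    print(find_terminal("money", reasons))  # -> "staying alive"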
[-][anonymous]00

Can someone please react to my gut reaction about virtue ethics? I'd love some feedback if I misunderstand something.

It seems to me that most virtues are just instrumental values that make life convenient for people, especially those with unclear or intimidating terminal values.

The author says this about protagonist Tris:

Bravery was a virtue that she thought she ought to have. If the graph of her motivations even went any deeper, the only node beyond ‘become brave’ was ‘become good.’

I think maybe the deeper 'become good' node (and its huge overlap w... (read more)

[This comment is no longer endorsed by its author]
[-][anonymous]-30

You don't know your terminal goals in detail, whatever that should be. Instead, there is merely a powerful technique of working towards goals, which are not terminal goals, but guesses at what's valuable, compared to alternative goals, alternative plans and outcomes implicit in them. Choosing among goals allows achieving more difficult outcomes than merely choosing among simpler actions where you can see the whole plan before it starts (you can't plan a whole chess game in advance, can't win it by an explicit plan that enumerates the moves, but you can win... (read more)

[This comment is no longer endorsed by its author]