
The Promoted Posts and the Metaethics sequence now available in audio

16 Rick_from_Castify 03 June 2014 02:00AM

We are proud to announce audio versions of the Less Wrong Promoted Posts and the Metaethics major sequence, both now available via a Castify Podcast.

The Less Wrong Promoted Posts feed will have every new promoted post that has been tagged with the Creative Commons Attribution License.  We'll aim to have them read and delivered to you via the podcast within 48 hours.  We've found this to be a good way to keep up with Less Wrong, especially for longer articles like last month's interesting long-form post "A Dialogue on Doublethink" by BrienneStrohl.

The Metaethics sequence is the latest installment in the sequences we've produced in audio.  We now have 7 Less Wrong sequences in audio, with more on the way.

As always we appreciate your support and your feedback: support@castify.co.

 

Links:

Promoted Posts Subscription: http://castify.co/channels/51-less-wrong

Metaethics sequence: http://castify.co/channels/50-metaethics

Channels page: http://castify.co/channels

 

Morality is Awesome

86 [deleted] 06 January 2013 03:21PM

(This is a semi-serious introduction to the metaethics sequence. You may find it useful, but don't take it too seriously.)

Meditate on this: A wizard has turned you into a whale. Is this awesome?

Is it?

"Maybe? I guess it would be pretty cool to be a whale for a day. But only if I can turn back, and if I stay human inside and so on. Also, that's not a whale.

"Actually, a whale seems kind of specific, and I'd be suprised if that was the best thing the wizard can do. Can I have something else? Eternal happiness maybe?"

Meditate on this: A wizard has turned you into orgasmium, doomed to spend the rest of eternity experiencing pure happiness. Is this awesome?

...

"Kindof... That's pretty lame actually. On second thought I'd rather be the whale; at least that way I could explore the ocean for a while.

"Let's try again. Wizard: maximize awesomeness."

Meditate on this: A wizard has turned himself into a superintelligent god, and is squeezing as much awesomeness out of the universe as it could possibly support. This may include whales and starships and parties and Jupiter brains and friendship, but only if they are awesome enough. Is this awesome?

...

"Well, yes, that is awesome."


What we just did there is called Applied Ethics. Applied ethics is about what is awesome and what is not. Parties with all your friends inside superintelligent starship-whales are awesome. ~666 children dying of hunger every hour is not.

(There is also normative ethics, which is about how to decide if something is awesome, and metaethics, which is about something or other that I can't quite figure out. I'll tell you right now that those terms are not on the exam.)

"Wait a minute!" you cry, "What is this awesomeness stuff? I thought ethics was about what is good and right."

I'm glad you asked. I think "awesomeness" is what we should be talking about when we talk about morality. Why do I think this?

  1. "Awesome" is not a philosophical landmine. If someone encounters the word "right", all sorts of bad philosophy and connotations send them spinning off into the void. "Awesome", on the other hand, has no philosophical respectability, hence no philosophical baggage.

  2. "Awesome" is vague enough to capture all your moral intuition by the well-known mechanisms behind fake utility functions, and meaningless enough that this is no problem. If you think "happiness" is the stuff, you might get confused and try to maximize actual happiness. If you think awesomeness is the stuff, it is much harder to screw it up.

  3. If you do manage to actually implement "awesomeness" as a maximization criterion, the results will be actually good. That is, "awesome" already refers to the same things "good" is supposed to refer to.

  4. "Awesome" does not refer to anything else. You think you can just redefine words, but you can't, and this causes all sorts of trouble for people who overload "happiness", "utility", etc.

  5. You already know that you know how to compute "Awesomeness", and it doesn't feel like it has a mysterious essence that you need to study to discover. Instead it brings to mind concrete things like starship-whale math-parties and not-starving children, which is what we want anyways. You are already enabled to take joy in the merely awesome.

  6. "Awesome" is implicitly consequentialist. "Is this awesome?" engages you to think of the value of a possible world, as opposed to "Is this right?" which engages to to think of virtues and rules. (Those things can be awesome sometimes, though.)

I find that the above is true about me, and is nearly all I need to know about morality. It handily inoculates against the usual confusions, and sets me in the right direction to make my life and the world more awesome. It may work for you too.

I would append the additional facts that, if you wrote it out, the dynamic procedure to compute awesomeness would be hellishly complex, and that right now it is only implicitly encoded in human brains, and nowhere else. Also, if the great procedure to compute awesomeness is not preserved, the future will not be awesome. Period.

Also, it's important to note that what you think of as awesome can be changed by considering things from different angles and being exposed to different arguments. That is, the procedure to compute awesomeness is dynamic and created already in motion.
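To make the "wizard: maximize awesomeness" framing concrete, here is a deliberately silly sketch (mine, not the post's; the candidate worlds and their scores are invented stand-ins for that hellishly complex, brain-encoded procedure):

```python
# Toy sketch only: a made-up stand-in for the real "awesomeness" procedure,
# which is hellishly complex and implicitly encoded in human brains.

def awesomeness(world: str) -> float:
    """Invented scores for the worlds considered in this post."""
    toy_scores = {
        "you are a whale for a day": 3.0,
        "you are orgasmium forever": 1.0,
        "superintelligent wizard squeezes maximum awesomeness out of the universe": 10.0,
    }
    return toy_scores.get(world, 0.0)

def wizard_maximize_awesomeness(candidate_worlds):
    """What the wizard does: pick the most awesome reachable world."""
    return max(candidate_worlds, key=awesomeness)

if __name__ == "__main__":
    worlds = [
        "you are a whale for a day",
        "you are orgasmium forever",
        "superintelligent wizard squeezes maximum awesomeness out of the universe",
    ]
    print(wizard_maximize_awesomeness(worlds))
```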

If we still insist on being confused, or if we're just curious, or if we need to actually build a wizard to turn the universe into an awesome place (though we can leave that to the experts), then we can see the metaethics sequence for the full argument, details, and finer points. I think the best post (and the one to read if you read only one) is Joy in the Merely Good.

By Which It May Be Judged

35 Eliezer_Yudkowsky 10 December 2012 04:26AM

Followup to: Mixed Reference: The Great Reductionist Project

Humans need fantasy to be human.

"Tooth fairies? Hogfathers? Little—"

Yes. As practice. You have to start out learning to believe the little lies.

"So we can believe the big ones?"

Yes. Justice. Mercy. Duty. That sort of thing.

"They're not the same at all!"

You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.

- Susan and Death, in Hogfather by Terry Pratchett

Suppose three people find a pie - that is, three people exactly simultaneously spot a pie which has been exogenously generated in unclaimed territory. Zaire wants the entire pie; Yancy thinks that 1/3 each is fair; and Xannon thinks that fair would be taking into equal account everyone's ideas about what is "fair".

continue reading »

Against Utilitarianism: Sobel's attack on judging lives' goodness

13 gwern 31 January 2012 05:45AM

Luke tasked me with researching the following question:

I'd like to know if anybody has come up with a good response to any of the objections to 'full information' or 'ideal preference' theories of value given in Sobel (1994). (My impression is "no.")

The paper in question is David Sobel’s 1994 paper “Full Information Accounts of Well-Being” (Ethics 104, no. 4: 784–810) (his 1999 paper, “Do the desires of rational agents converge?”, is directed against a different kind of convergence and won’t be discussed here).

The starting point is Brandt's 1979 book where he describes his version of utilitarianism, in which utility is the degree of satisfaction of the desires of one's ideal 'fully informed' self, and Sobel also refers to the 1986 Railton apologetic. (LWers will note that this kind of utilitarianism sounds very similar to CEV and hence, any criticism of the former may be a valid criticism of the latter.) I'll steal entirely the opening to Mark C. Murphy's 1999 paper, "The Simple Desire-Fulfillment Theory" (rejecting any hypotheticals or counterfactuals in desire utilitarianism), since he covers all the bases (for even broader background, see the Tanner Lecture "The Status of Well-Being"):

continue reading »

Terminal Bias

18 [deleted] 30 January 2012 09:03PM

I've seen people on Less Wrong taking cognitive structures that I consider to be biases as terminal values. Take risk aversion, for example:

Risk Aversion

For a rational agent with goals that don't include "being averse to risk", risk aversion is a bias. The correct decision theory acts on expected utility, with utility of outcomes and probability of outcomes factored apart and calculated separately. Risk aversion does not keep them factored apart.

EDIT: There is some contention on this. Just substitute "that thing minimax algorithms do" for "risk aversion" in my writing. /EDIT

A while ago, I was working through the derivation of the A* and minimax planning algorithms from a Bayesian and decision-theoretic base. When I was trying to understand the relationship between them, I realized that strong risk aversion, aka minimax, saves huge amounts of computation compared to the correct decision theory, and actually gets closer to optimal as the environment becomes more influenced by rational opponents. The best way to win is to deny the opponents any opportunity to weaken you. That's why minimax is a good algorithm for chess.
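To make the contrast concrete, here is a minimal sketch (mine, not part of the original post; the gamble and its numbers are invented) of the two decision rules side by side: expected utility keeps probabilities and utilities factored apart and combines them by multiplication, while the minimax-style rule ignores the probabilities and scores each action by its worst outcome.

```python
# Toy comparison of expected-utility choice vs. minimax-style "risk aversion".
# Each action maps to a list of (probability, utility) outcomes; numbers are invented.

actions = {
    "bold plan": [(0.9, 100.0), (0.1, -50.0)],  # usually great, occasionally bad
    "safe plan": [(1.0, 20.0)],                 # guaranteed modest payoff
}

def expected_utility(outcomes):
    # Probabilities and utilities kept factored apart, then combined by multiplication.
    return sum(p * u for p, u in outcomes)

def worst_case(outcomes):
    # Minimax-style scoring: ignore probabilities, look only at the worst outcome.
    return min(u for _, u in outcomes)

eu_choice = max(actions, key=lambda a: expected_utility(actions[a]))
minimax_choice = max(actions, key=lambda a: worst_case(actions[a]))

print("expected utility picks:", eu_choice)   # bold plan (EU 85 vs 20)
print("minimax picks:", minimax_choice)       # safe plan (worst case 20 vs -50)
```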

Current theories about the origin of our intelligence say that we became smart to outsmart our opponents in complex social games. If our intelligence was built for adversarial games, I am not surprised at risk aversion.

A better theoretical replacement, together with a plausible causal history for why we have the bias instead of the correct algorithm, is convincing to me as an argument against risk aversion as a value, in the same way that a rectangular 13x7 pebble heap is convincing to a pebble sorter as an argument against the correctness of a heap of 91 pebbles: it seems undeniable, but I don't have access to the hidden values that would say for sure.

And yet I've seen people on LW state that their "utility function" includes risk aversion. Because I don't understand the values involved, all I can do is state the argument above and see if other people are as convinced as me.

It may seem silly to take a bias as terminal, but there are examples with similar arguments that are less clear-cut, and some that we take as uncontroversially terminal:

Responsibility and Identity

The feeling that you are responsible for some things and not others (say, the safety of your family, but not people being tortured in Syria) seems noble and practical. But I take it to be a bias.

I'm no evolutionary psychologist, but it seems to me that feelings of responsibility are a quick hack to kick you into motion where you can affect the outcome and the utility at stake is large. For the most part, this aligns well with utilitarianism; you usually don't feel responsible for things you can't really affect, like people being tortured in Syria, or the color of the sky. You do feel responsible for pulling a passed-out kid off the train tracks, but maybe you don't feel responsible for giving them fashion advice.

Responsibility seems to be built on identity, so it starts to go weird when you identify or don't identify in ways that didn't happen in the ancestral environment. Maybe you identify as a citizen of the USA, but not of Syria, so you feel shame and responsibility about the US torturing people, but the people being tortured in Syria are not your responsibility, even though both cases are terrible, and there is very little you can do about either. A proper utilitarian would feel approximately the same desire to do something about each, but our responsibility hack emphasizes responsibility for the actions of the tribe you identify with.

You might feel great responsibility to defend your past actions but not those of other people, even though neither is worth "defending". A rational agent would learn from both the actions of their own past selves and those of other people without seeking to justify or condemn; they would update and move on. There is no tribal council that will exile you if you change your tune or don't defend yourself.

You might be appalled that someone wishes to stop feeling responsibility for their past selves: "But if they don't feel responsibility for their actions, what will prevent them from murdering people, or encourage them to do good?" A rational utilitarian would do good and not do evil because they wish good and non-evil to be done, instead of because of feelings of responsibility that they don't understand.

This argument is a little harder to see and possibly a little less convincing, but again I am convinced that identity and responsibility are inferior to utilitarianism, though they may have seemed almost terminal.

Justice

Surely justice is a terminal value; it feels so noble to desire it. Again I consider the desire for justice to be a biased heuristic.

In game theory, the best solution for the iterated prisoner's dilemma is tit-for-tat: cooperate and be nice, but punish defectors. Tit-for-tat looks a lot like our instincts for justice, and I've heard that the prisoner's dilemma is a simplified analog of many of the situations that came up in the ancestral environment, so I am not surprised that we have an instinct for it.
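As a concrete illustration (mine, not the post's; the payoff matrix is the standard textbook one), here is a minimal tit-for-tat agent for the iterated prisoner's dilemma: cooperate on the first round, then copy whatever the opponent did last round.

```python
# Minimal iterated prisoner's dilemma with a tit-for-tat strategy.
# 'C' = cooperate, 'D' = defect. Payoffs are the usual textbook values.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # tit-for-tat is nice once, then punishes
```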

It's nice that we have a hardware implementation of tit-for-tat, but to the extent that we take it as terminal instead of instrumental-in-some-cases, it will make mistakes. It will work well when individuals might choose to defect from the group for greater personal gain, but what if we discover, for example, that some murders are not calculated defections, but failures of self-control caused by a bad upbringing and lack of education? What if we then further discover that there is a two-month training course that has a high success rate of turning murderers into productive members of society? When Dan the Deadbeat kills his girlfriend, and the psychologists tell us he is a candidate for the rehab program, we can demand justice like we feel we ought to, at a cost of hundreds of thousands of dollars and a good chunk of Dan's life, or we can run Dan through the two-month training course for a few thousand dollars, transforming him into a good, normal person. People who take punishment of criminals as a terminal value will choose prison for Dan, but people with other interests would say rehab.

One problem with this story is that the two-month murder rehab seems wildly impossible, but so do all of Omega's tricks. I think it's good to stress our theories at the limits; they seem to come out stronger, even for normal cases.

I was feeling skeptical about some people's approach to justice theory when I came up with this one, so I was open to changing my understanding of justice. I am now convinced that justice and punishment instincts are instrumental, and only approximations of the correct game theory and utilitarianism. The problem is, while I was convinced, someone who takes justice as terminal, and is not open to the idea that it might be wrong, is absolutely not convinced. They will say "I don't care if it is more expensive, or that you have come up with something that 'works better'; it is our responsibility to the criminal to punish them for their misdeeds." Part of the reason for this post is that I don't know what to say to this. All I can do is state the argument that convinced me, ask if they have something to protect, and feel like I'm arguing with a rock.

Before anyone who is still with me gets enthusiastic about the idea that knowing a causal history and an instrumentally better way is enough to turn a value into a bias, consider the following:

Love, Friendship, and Flowers

See The Gift We Give To Tomorrow. That post contains plausible histories for why we ended up with nice things like love, friendship, and beauty, and hints that could lead you to 'better' replacements made out of game theory and decision theory.

Unlike the other examples, where I felt a great "Aha!" and decided to use the superior replacements when appropriate, this time I feel scared. I thought I had it all locked out, but I've found some existential angst lurking in the basement.

Love and such seem like something to protect, like I don't care if there are better solutions to the problem they were built to solve; I don't care if game theory and decision theory lead to more optimal replication. If I'm worried that love will go away, then there's no reason I ought to let it, but these are the same arguments made by the people who think justice is terminal. What is the difference that makes it right this time?

Worrying and Conclusion

One answer to this riddle is that everyone is right with respect to themselves, and there's nothing we can do about disagreements. There's nothing someone who has one interpretation can say to another to justify their values against some objective standard. By the full power of my current understanding, I'm right, but so is someone who disagrees.

On the other hand, maybe we can do some big million-variable optimization on the contradictory values and heuristics that make up ourselves and come to a reflectively coherent understanding of which are values and which are biases. Maybe none of them have to be biases; it makes sense and seems acceptable that sometimes we will have to go against one of our values for greater gain in another. Maybe I'm asking the wrong question.

I'm confused, what does LW think?

Solution

I was confused about this for a while; is it just something that we have to (Gasp!) agree to disagree about? Do we have to do a big analysis to decide once and for all which are "biases" and which are "values"? My favored solution is to dissolve the distinction between biases and values:

All our neat little mechanisms and heuristics make up our values, but they come on a continuum of importance, and some of them sabotage the rest more than others.

For example, all those nice things like love and beauty seem very important, and usually don't conflict, so they are closer to values.

Things like risk aversion and hindsight bias and such aren't terribly important, but because they prescribe otherwise stupid behavior in the decision-theory/epistemological realm, they sabotage the achievement of other biases/values, and are therefore a net negative.

This can work for the high-value things like love and beauty and freedom as well: say you are designing a machine that will achieve many of your values; being biased towards making it beautiful over functional could sabotage achievement of other values. Being biased against having powerful agents interfere with freedom can prevent you from accepting law or safety.

So debiasing is knowing how and when to override less important "values" for the sake of more important ones, like overriding your aversion to cold calculation to maximize lives saved in a "shut up and multiply" situation.

The Human's Hidden Utility Function (Maybe)

44 lukeprog 23 January 2012 07:39PM

Suppose it turned out that humans violate the axioms of VNM rationality (and therefore don't act like they have utility functions) because there are three valuation systems in the brain that make conflicting valuations, and all three systems contribute to choice. And suppose that upon reflection we would clearly reject the outputs of two of these systems, whereas the third system looks something like a utility function we might be able to use in CEV.

What I just described is part of the leading theory of choice in the human brain.

Recall that human choices are made when certain populations of neurons encode expected subjective value (in their firing rates) for each option in the choice set, with the final choice being made by an argmax or reservation price mechanism.
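As a minimal gloss (mine, not the post's) on that final step, here are the two readout rules over a set of encoded subjective values: argmax picks the highest-valued option in the choice set, while a reservation-price rule accepts the first option whose value clears a threshold. The options and numbers are invented.

```python
# Two toy readout rules over encoded subjective values (numbers invented).

subjective_value = {"apple": 0.4, "chocolate": 0.9, "kale": 0.2}

def argmax_choice(values):
    """Pick the option with the highest encoded value."""
    return max(values, key=values.get)

def reservation_price_choice(options_in_order, values, threshold=0.8):
    """Accept the first option whose value clears the threshold, if any."""
    for option in options_in_order:
        if values[option] >= threshold:
            return option
    return None  # no acceptable option yet; keep searching

print(argmax_choice(subjective_value))                          # chocolate
print(reservation_price_choice(["apple", "chocolate", "kale"],
                               subjective_value))               # chocolate
```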

Today's news is that our best current theory of human choices says that at least three different systems compute "values" that are then fed into the final choice circuit:

  • The model-based system "uses experience in the environment to learn a model of the transition distribution, outcomes and motivationally-sensitive utilities." (See Sutton & Barto 1998 for the meanings of these terms in reinforcement learning theory.) The model-based system also "infers choices by... building and evaluating the search decision tree to work out the optimal course of action." In short, the model-based system is responsible for goal-directed behavior. However, making all choices with a goal-directed system using something like a utility function would be computationally prohibitive (Daw et al. 2005), so many animals (including humans) first evolved much simpler methods for calculating the subjective values of options (see below).

  • The model-free system also learns a model of the transition distribution and outcomes from experience, but "it does so by caching and then recalling the results of experience rather than building and searching the tree of possibilities. Thus, the model-free controller does not even represent the outcomes... that underlie the utilities, and is therefore not in any position to change the estimate of its values if the motivational state changes. Consider, for instance, the case that after a subject has been taught to press a lever to get some cheese, the cheese is poisoned, so it is no longer worth eating. The model-free system would learn the utility of pressing the lever, but would not have the informational wherewithal to realize that this utility had changed when the cheese had been poisoned. Thus it would continue to insist upon pressing the lever. This is an example of motivational insensitivity."

  • The Pavlovian system, in contrast, calculates values based on a set of hard-wired preparatory and consummatory "preferences." Rather than calculate value based on what is likely to lead to rewarding and punishing outcomes, the Pavlovian system calculates values consistent with automatic approach toward appetitive stimuli, and automatic withdrawal from aversive stimuli. Thus, "animals cannot help but approach (rather than run away from) a source of food, even if the experimenter has cruelly arranged things in a looking-glass world so that the approach appears to make the food recede, whereas retreating would make the food more accessible (Hershberger 1986)."

Or, as Jandila put it:

  • Model-based system: Figure out what's going on, and what actions maximize returns, and do them.
  • Model-free system: Do the thingy that worked before again!
  • Pavlovian system: Avoid the unpleasant thing and go to the pleasant thing. Repeat as necessary.
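To tie the three systems above together, here is a deliberately crude sketch (mine; the numbers, the one-step "model", and the equal weighting are all invented) of three differently computed value signals for the poisoned-cheese lever example being fed into one final argmax choice.

```python
# Crude toy: three value signals for each option, combined by a final argmax.
# The numbers and the one-step "model" are invented for illustration.

CHEESE_IS_POISONED = True  # the motivational state has changed since learning

def model_based_value(option):
    # Look ahead through a (one-step) model of outcomes and current utilities.
    if option == "press lever":
        return -5.0 if CHEESE_IS_POISONED else 4.0
    return 0.0

CACHED_VALUES = {"press lever": 4.0, "do nothing": 0.0}  # learned before poisoning

def model_free_value(option):
    # Recall a cached value; cannot notice that the outcome's utility changed.
    return CACHED_VALUES[option]

def pavlovian_value(option):
    # Hard-wired approach toward appetitive stimuli (the smell of cheese).
    return 2.0 if option == "press lever" else 0.0

def choose(options, weights=(1.0, 1.0, 1.0)):
    wb, wf, wp = weights
    def combined(option):
        return (wb * model_based_value(option)
                + wf * model_free_value(option)
                + wp * pavlovian_value(option))
    return max(options, key=combined)

print(choose(["press lever", "do nothing"]))  # "press lever": the cached and
# Pavlovian signals outvote the model-based system's updated estimate here.
```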

continue reading »

Not By Empathy Alone

19 gwern 05 October 2011 12:36AM

The following are extracts from the paper “Is Empathy Necessary For Morality?” (philpapers) by Jesse Prinz (WP) of CUNY; recently linked in a David Brooks New York Times column, “The Limits of Empathy”:

1 Introduction

Not only is there little evidence for the claim that empathy is necessary, there is also reason to think empathy can interfere with the ends of morality. A capacity for empathy might make us better people, but placing empathy at the center of our moral lives may be ill‐advised. That is not to say that morality shouldn’t centrally involve emotions. I think emotions are essential for moral judgment and moral motivation (Prinz, 2007)1. It’s just that empathetic emotions are not ideally suited for these jobs.

continue reading »

A Sketch of an Anti-Realist Metaethics

16 Jack 22 August 2011 05:32AM

Below is a sketch of a moral anti-realist position based on the map-territory distinction, Hume and studies of psychopaths. Hopefully it is productive.

The Map is Not the Territory Reviewed

Consider the founding metaphor of Less Wrong: the map-territory distinction. Beliefs are to reality as maps are to territory. As the wiki says:

Since our predictions don't always come true, we need different words to describe the thingy that generates our predictions and the thingy that generates our experimental results. The first thingy is called "belief", the second thingy "reality".

Of course the map is not the territory.

Here is Albert Einstein making much the same analogy:

Physical concepts are free creations of the human mind and are not, however it may seem, uniquely determined by the external world. In our endeavor to understand reality we are somewhat like a man trying to understand the mechanism of a closed watch. He sees the face and the moving hands, even hears its ticking, but he has no way of opening the case. If he is ingenious he may form some picture of a mechanism which could be responsible for all the things he observes, but he may never be quite sure his picture is the only one which could explain his observations. He will never be able to compare his picture with the real mechanism and cannot even imagine the possibility or the meaning of such a comparison. But he certainly believes that, as his knowledge increases, his picture of reality will become simpler and simpler and will explain a wider and wider range of his sensuous impressions. He may also believe in the existence of the ideal limit of knowledge and that it is approached by the human mind. He may call this ideal limit the objective truth.

The above notions about beliefs involve pictorial analogs, but we can also imagine other ways the same information could be contained. If the ideal map is turned into a series of sentences we can define a 'fact' as any sentence in the ideal map (IM). The moral realist position can then be stated as follows:

Moral Realism: ∃x(x ⊂ IM) & (x = M)

In English: there is some set of sentences x such that all the sentences are part of the ideal map and x provides a complete account of morality.

Moral anti-realism simply negates the above.  ¬(∃x(x ⊂ IM) & (x = M)).
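Spelled out with explicit quantifier scope (my rendering of the formulas above; I write ⊆ since x is a set of sentences drawn from the ideal map IM, and M names the complete account of morality), the two positions read:

```latex
% Moral realism: some set of sentences x is part of the ideal map and is
% (identical to) the complete account of morality.
\exists x\, \bigl( (x \subseteq \mathrm{IM}) \wedge (x = \mathrm{M}) \bigr)

% Moral anti-realism: there is no such x.
\neg\, \exists x\, \bigl( (x \subseteq \mathrm{IM}) \wedge (x = \mathrm{M}) \bigr)
```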

continue reading »

A Crash Course in the Neuroscience of Human Motivation

120 lukeprog 19 August 2011 09:15PM

[PDF of this article updated Aug. 23, 2011]


Whenever I write a new article for Less Wrong, I'm pulled in two opposite directions.

One force pulls me toward writing short, exciting posts with lots of brain candy and just one main point. Eliezer has done that kind of thing very well many times: see Making Beliefs Pay Rent, Hindsight Devalues Science, Probability is in the Mind, Taboo Your Words, Mind Projection Fallacy, Guessing the Teacher's Password, Hold Off on Proposing Solutions, Applause Lights, Dissolving the Question, and many more.

Another force pulls me toward writing long, factually dense posts that fill in as many of the pieces of a particular argument in one fell swoop as possible. This is largely because I want to write about the cutting edge of human knowledge but I keep realizing that the inferential gap is larger than I had anticipated, and I want to fill in that inferential gap quickly so I can get to the cutting edge.

For example, I had to draw on dozens of Eliezer's posts just to say I was heading toward my metaethics sequence. I've also published 21 new posts (many of them quite long and heavily researched) written specifically because I need to refer to them in my metaethics sequence.1 I tried to make these posts interesting and useful on their own, but my primary motivation for writing them was that I need them for my metaethics sequence.

And now I've written only four posts2 in my metaethics sequence and already the inferential gap to my next post in that sequence is huge again. :(

So I'd like to try an experiment. I won't do it often, but I want to try it at least once. Instead of writing 20 more short posts between now and the next post in my metaethics sequence, I'll attempt to fill in a big chunk of the inferential gap to my next metaethics post in one fell swoop by writing a long tutorial post (a la Eliezer's tutorials on Bayes' Theorem and technical explanation).3

So if you're not up for a 20-page tutorial on human motivation, this post isn't for you, but I hope you're glad I bothered to write it for the sake of others. If you are in the mood for a 20-page tutorial on human motivation, please proceed.

continue reading »

Pluralistic Moral Reductionism

33 lukeprog 01 June 2011 12:59AM

Part of the sequence: No-Nonsense Metaethics

Disputes over the definition of morality... are disputes over words which raise no really significant issues. [Of course,] lack of clarity about the meaning of words is an important source of error… My complaint is that what should be regarded as something to be got out of the way in the introduction to a work of moral philosophy has become the subject matter of almost the whole of moral philosophy...

Peter Singer

 

If a tree falls in the forest, and no one hears it, does it make a sound? If by 'sound' you mean 'acoustic vibrations in the air', the answer is 'Yes.' But if by 'sound' you mean an auditory experience in the brain, the answer is 'No.'

We might call this straightforward solution pluralistic sound reductionism. If people use the word 'sound' to mean different things, and people have different intuitions about the meaning of the word 'sound', then we needn't endlessly debate which definition is 'correct'.1 We can be pluralists about the meanings of 'sound'. 

To facilitate communication, we can taboo and reduce: we can replace the symbol with the substance and talk about facts and anticipations, not definitions. We can avoid using the word 'sound' and instead talk about 'acoustic vibrations' or 'auditory brain experiences.'
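As a toy illustration (mine, not the sequence's) of "taboo and reduce": replace the tabooed word with each candidate substance, and the tree-falls question stops being a single question with a single answer.

```python
# Toy "taboo and reduce": answer the tree-falls question separately for each
# reduction of the tabooed word 'sound'. Scenario: a tree falls, no one is around.

scenario = {
    "acoustic_vibrations_in_air": True,
    "auditory_experience_in_a_brain": False,
}

reductions_of_sound = {
    "acoustic vibrations in the air": "acoustic_vibrations_in_air",
    "auditory experience in a brain": "auditory_experience_in_a_brain",
}

for meaning, fact in reductions_of_sound.items():
    answer = "Yes" if scenario[fact] else "No"
    print(f"If by 'sound' you mean '{meaning}': {answer}.")
```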

Still, some definitions can be wrong:

Alex: If a tree falls in the forest, and no one hears it, does it make a sound?

Austere MetaAcousticist: Tell me what you mean by 'sound', and I will tell you the answer.

Alex: By 'sound' I mean 'acoustic messenger fairies flying through the ether'.

Austere MetaAcousticist: There's no such thing. Now, if you had asked me about this other definition of 'sound'...

There are other ways for words to be wrong, too. But once we admit to multiple potentially useful reductions of 'sound', it is not hard to see how we could admit to multiple useful reductions of moral terms.

 

Many Moral Reductionisms

Moral terms are used in a greater variety of ways than sound terms are. There is little hope of arriving at the One True Theory of Morality by analyzing common usage or by triangulating from the platitudes of folk moral discourse. But we can use stipulation, and we can taboo and reduce. We can use pluralistic moral reductionism2 (for austere metaethics, not for empathic metaethics).

Example #1:

Neuroscientist Sam Harris: Which is better? Religious totalitarianism or the Northern European welfare state?

Austere Metaethicist: What do you mean by 'better'?

Harris: By 'better' I mean 'that which tends to maximize the well-being of conscious creatures'.

Austere Metaethicist: Assuming we have similar reductions of 'well-being' and 'conscious creatures' in mind, the evidence I know of suggests that the Northern European welfare state is more likely to maximize the well-being of conscious creatures than religious totalitarianism.

Example #2:

Philosopher Peter Railton: Is capitalism the best economic system?

Austere Metaethicist: What do you mean by 'best'?

Railton: By 'best' I mean 'would be approved of by an ideally instrumentally rational and fully informed agent considering the question "How best to maximize the amount of non-moral goodness?" from a social point of view in which the interests of all potentially affected individuals are counted equally'.

Austere Metaethicist: Assuming we agree on the meaning of 'ideally instrumentally rational' and 'fully informed' and 'agent' and 'non-moral goodness' and a few other things, the evidence I know of suggests that capitalism would not be approved of by an ideally instrumentally rational and fully informed agent considering the question "How best to maximize the amount of non-moral goodness?" from a social point of view in which the interests of all potentially affected individuals were counted equally.

continue reading »
