It's been claimed that increasing rationality increases effective altruism. I think that this is true, but the effect size is unclear to me, so it seems worth exploring how strong the evidence for it is. I've offered some general considerations below, followed by a description of my own experience. I'd very much welcome thoughts on the effect that rationality has had on your own altruistic activities (and any other relevant thoughts).

The 2013 LW Survey found that 28.6% of respondents identified as effective altruists. This rate is much higher than the rate in the general population (even after controlling for intelligence), and because LW is distinguished by virtue of being a community focused on rationality, one might be led to the conclusion that increasing rationality increases effective altruism. But there are a number of possible confounding factors: 

  1. It's ambiguous what the respondents meant when they said that they're "effective altruists." (They could have used the term the way Wikipedia does, or they could have meant it in a more colloquial sense.)
  2. Interest in rationality and interest in effective altruism might both stem from an underlying dispositional variable.
  3. Effective altruists may be more likely than members of the general population to seek to improve their epistemic rationality.
  4. The rationalist community and the effective altruist community may have become intertwined by historical accident, by virtue of having some early members in common.

So it's helpful to look beyond the observed correlation and think about the hypothetical causal pathways between increased rationality and increased effective altruism.

The above claim can be broken into several subclaims (any or all of which may be intended):

Claim 1: When people are more rational, they're more likely to choose the altruistic endeavors that they engage in with a view toward maximizing utilitarian expected value.

Claim 2: When people are more rational, they're more likely to succeed in their altruistic endeavors.

Claim 3: Being more rational strengthens people's altruistic motivation.


Claim 1: "When people are more rational, they're more likely to choose the altruistic endeavors that they engage in with a view toward maximizing utilitarian expected value."

Some elements of effective altruist thinking are:

  • Consequentialism. In Yvain's Consequentialism FAQ, he argues that consequentialism follows, upon reflection, from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value". Rationality seems useful for recognizing that there's a tension between these principles and other common moral intuitions, but this doesn't necessarily translate into a desire to resolve the tension, nor a choice to resolve the tension in favor of these principles over others. So it seems that increased rationality does increase the likelihood that one will be a consequentialist, but it's not sufficient on its own.
  • Expected value maximization. In Circular Altruism and elsewhere, Eliezer describes cognitive biases that people exhibit in scenarios with a probabilistic element, and how reflection can lead one to the notion that one should organize one's altruistic efforts to maximize expected value (in the technical sense), rather than making decisions based on these biases. Here too, rationality seems useful for recognizing that one's intuitions are in conflict because of cognitive biases, without necessarily entailing an inclination to resolve the tension. However, in this case, if one does seek to resolve the tension, the choice of expected value maximization over other alternatives is canonical, so rationality seems to take one further toward expected value maximization than toward consequentialism. (A toy numerical illustration follows this list.)
  • The principle of indifference. The idea that, from an altruistic point of view, we should care about people who are unrelated to us as much as we do about people who are related to us. For example, in The Life You Can Save: How to Do Your Part to End World Poverty, Peter Singer makes the case that we should show a similar degree of moral concern for people in the developing world who are suffering from poverty as we do for people in our own neighborhoods. I'd venture the guess that its popularity among rationalists is an artifact of culture or a selection effect rather than a consequence of rationality. Note that concern about global poverty is far more prevalent than interest in rationality (while still being low enough that global poverty is far from alleviated).
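A toy numerical illustration of the expected value maximization point above, with made-up numbers of the sort discussed in Circular Altruism (a certain rescue versus a gamble that saves more lives in expectation):

```python
# Illustrative numbers only: a certain rescue vs. a gamble with a higher expected
# number of lives saved. Expected value maximization (in the technical sense)
# picks the gamble, even though many people intuitively prefer the certain option.
options = {
    "certain rescue": [(400, 1.0)],            # 400 lives saved for sure
    "risky rescue":   [(500, 0.9), (0, 0.1)],  # 90% chance of saving 500 lives
}

for name, outcomes in options.items():
    expected_lives = sum(lives * p for lives, p in outcomes)
    print(name, expected_lives)   # certain rescue: 400.0, risky rescue: 450.0
```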

Claim 2: "When people are more rational, they're more likely to succeed in their altruistic endeavors."

If "rationality" is taken to be "instrumental rationality" then this is tautologically true, so the relevant sense of "rationality" here is "epistemic." 

  • The question of how useful epistemic rationality is in general has been debated (e.g. here, here, here, here, and here).
  • I think that epistemic rationality matters more for altruistic endeavors than it does in other contexts. Cognitive biases were developed for survival and evolutionary fitness, and these correlate more strongly with personal well-being than with the well-being of others. I think that epistemic rationality matters still more for those who aspire to maximize utilitarian expected value: cognitive biases track the well-being of others within one's social circles more closely than the well-being of those outside of them.
  • In Cognitive Biases Potentially Affecting Judgment of Global Risks, Eliezer describes some cognitive biases that can lead one to underestimate the likelihood of risks of human extinction. To the extent that reducing these risks is the most promising philanthropic cause (as Eliezer has suggested), reducing cognitive biases improves people's prospects of maximizing utilitarian expected value.

Claim 3: "Being more rational strengthens people's altruistic motivation."

  • I think that there may be some effect in this direction mediated through improved well-being: when people's emotional well-being increases, their empathy also increases. 
  • It's possible to come to the conclusion that one should care as much about others as one does about oneself through philosophical reflection, and I know people who have had this experience. I don't know whether or not this is accurately described as an effect attributable to improved accuracy of beliefs, though.

Putting it all together

The considerations above suggest that increased rationality of a population only slightly (if at all) increases effective altruism at the 50th percentile of the population, but increases it more at higher percentiles, with the effect becoming more and more extreme the further up one goes. This is analogous to, e.g., the effect of height on income.

My own experience

In A personal history of involvement with effective altruism I give some relevant autobiographical information. Summarizing and elaborating a bit:

  • I was fully on board with consequentialism and with ascribing similar value to strangers as to familiar people as an early teenager, before I had any knowledge of cognitive biases as such, and at a time when my predictive model of the world was in many ways weaker than those of most adults.
  • It was only when I read Eliezer's posts that the justification for expected value maximization in altruistic contexts clicked. Understanding it didn't require background knowledge — it seems independent of most aspects of rationality.
  • I started reading Less Wrong because a friend pointed me to Yvain's posts on utilitarianism. My interest in rationality was more driven by my interest in effective altruism than the other way around. This is evidence that the high fraction of Less Wrongers who identify as effective altruists is partially a function of Less Wrong being an attractor for people who are already interested in effective altruism.
  • So far, increased rationality hasn't increased my productivity to any degree I can clearly discern. There are changes that have occurred in my thinking that greatly increase my productivity in the most favorable possible future scenarios, relative to a counterfactual in which these changes hadn't occurred. This is in consonance with my remark under the "putting it all together" heading above.

How about you?


To what extent does improved rationality lead to effective altruism?
Comments
[-]Ixiel200

Sorry if this is obviously covered somewhere but every time I think I answer it in either direction I immediately have doubts.

Does EA come packaged with "we SHOULD maximize our altruism" or does it just assert that IF we are giving, well, anything worth doing is worth doing right?

For example, I have no interest in giving materially more than I already do, but getting more bang for my buck in my existing donations sounds awesome. Do I count? I currently think not but I've changed my mind enough to just ask.

5JonahS
It's a semantic distinction, but I would count yourself – every bit counts. There is some concern that the EA movement will become "watered down," but the concern is that epistemic standards will fall, not that the average percentage donated by members of the movement will fall.
1diegocaleiro
Well, distortion of ideas and concepts within EA can go a long way. It doesn't hurt to be prepared for some meaning shift as well.
2ESRogs
I dunno, does Holden Karnofsky count as an EA? See: http://blog.givewell.org/2007/01/06/limits-of-generosity/. You count in my book.
1Lukas_Gloor
I think it is viewed as something in between by the EA community. Dedicating 10% of your time and resources in order to most effectively help others would definitely count as EA according to most self-identifying EAs, while 1% probably wouldn't, but it's a spectrum anyway. EA does not necessarily include any claims about moral realism / universally binding "shoulds", at least not as I understand it. It comes down to what you want to do.
  • Consequentialism. In Yvain's Consequentialism FAQ, he argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value" upon reflection. Rationality seems useful for recognizing that there's a tension between these principles and other common moral intuitions, but this doesn't necessarily translate into a desire to resolve the tension nor a choice to resolve the tension in favor of these principles over others. So it seems that increased rationality does increase the likelihood that one will be a consequentialist, but that it's also not sufficient.

  • Expected value maximization. In Circular Altruism and elsewhere, Eliezer describes cognitive biases that people employ in scenarios with a probabilistic element, and how reflection can lead one to the notion that one should organize one's altruistic efforts to maximize expected value (in the technical sense), rather than making decisions based on these biases. Here too, rationality seems useful for recognizing that one's intuitions are in conflict because of cognitive biases, without necessarily entailing an inclination to resolve the tension

[...]
1JonahS
Good points. Is my intended meaning clear?
4Sniffnoy
I mean, kind of? It's still all pretty mixed-up though. Enough people get consequentialism, expected utility maximization, and utilitarianism mixed up that I really don't think it's a good thing to further confuse them.

Being more rational makes rationalization harder. When confronted with thought experiments such as Peter Singer's drowning child example, it makes it harder to come up with reasons for not changing one's actions while still maintaining a self-image of being caring. While non-rationalists often object to EA by bringing up bad arguments (e.g. by not understanding expected utility theory or decision-making under uncertainty), rationalists are more likely to draw more radical conclusions. This means they might either accept the extreme conclusion that they wan...

2Jiro
I wouldn't suggest that people's response to dilemmas like Singer's is rationalization. Rather, I'd say that people have principles but are not very good at articulating them. If they say they should save a dying child because of some principle, that "principle" is just their best attempt to approximate the actual principle that they can't articulate. If the principle doesn't fit when applied to another case, fixing up the principle isn't rationalization; it's recognizing that the stated principle was only ever an approximation, and trying to find a better approximation. (And if the fix up is based on bad reasoning, that's just "trying to find a better approximation, and making a mistake doing so".) It may be easier to see when not talking about saving children. If you tell me you don't like winter days, and I point out that Christmas is a winter day and you like Christmas, and you then respond "well, I meant a typical winter day, not a special one like Christmas", that's not a rationalization, that's just revising what was never a 100% accurate statement and should not have been expected to be.
1JonahS
Yes, this is a good point.

Interest in rationality and interest in effective altruism might both stem from an underlying dispositional variable.

My impression is that a lack of compartmentalization is a risk factor for both LW and EA group membership.

My impression is that a lack of compartmentalization is a risk factor for both LW and EA group membership.

My impression is also that it is a risk factor for religious mania.

Lack of compartmentalization, also called taking ideas seriously, when applied to religious ideas, gives you religious mania. Applied to various types of collective utilitarianism, it can produce anything from EA to antinatalism, from tithing to giving away all that you have. Applied to what it actually takes to find out how the world works, it gives you Science.

Whether it's a good thing or a bad thing depends on what's in the compartments.

5TheOtherDave
Also on how conflicts are resolved.
2JonahS
Yes, this is a good point that I was semi-conscious of, but it wasn't salient enough to occur to me explicitly while writing my post.

My interest in rationality was more driven by my interest in effective altruism than the other way around.

This comment actually makes aspects of your writings here make sense, that did not make sense to me before.

Your post, overall, seems to have the assumption underlying it, that effective altruism is rational, and obviously so. I am not convinced this is the case (at the very least, not the "and obviously so" part).

To the extent that effective altruism is anything like a "movement", a "philosophy", a "community"...

-2JonahS
My post does carry the connotation "whether or not people engage in effective altruism is significant," but I didn't mean for it to carry the connotation that effective altruism is rational – on the contrary, that's the very question that I'm exploring :-) (albeit from the opposite end of the telescope). For an introduction to effective altruism, you could check out:

  • Peter Singer's TED talk
  • Yvain's article Efficient Charity: Do Unto Others...

Are you familiar with them? Thanks also for the feedback.

I've read Yvain's article, and reread it just now. It has the same underlying problem, which is: to the extent that it's obviously true, it's trivial[1]; to the extent that it's nontrivial, it's not obviously true.

Yvain talks about how we should be effective in the charity we choose to engage in (no big revelation here), then seems almost imperceptibly to slide into an assumed worldview where we're all utilitarians, where saving children is, of course, what we care about most, where the best charity is the one that saves the most children, etc.

To what extent are all of these things part of what "effective altruism" is? For instance (and this is just one possible example), let's say I really care about paintings more than dead children, and think that £550,000 paid to keep one mediocre painting in a UK museum is money quite well spent, even when the matter of sanitation in African villages is put to me as bluntly as you like; but I aspire to rationality, and want to purchase my artwork-retention-by-local-museums as cost-effectively as I can. Am I an effective altruist?

To put this another way: if "effective altruism" is really just "we should be effective in ...

6Said Achmiz
Ok, I've watched Singer's TED talk now, thank you for linking it. It does work as a statement of purpose, certainly. On the other hand it fails as an attempt to justify or argue for the movement's core values; at the same time, it makes it quite clear that effective altruism is not just about "let's be altruists effectively". It's got some specific values attached, more specific than can justifiably be called simply "altruism". I want to see, at least, some acknowledgment of that fact, and preferably, some attempt to defend those values. Singer doesn't do this; he merely handwaves in the general direction of "empathy" and "a rational understanding of our situation" (note that he doesn't explain what makes this particular set of values — valuing all lives equally — "rational").

Edit: My apologies! I just looked over your post again, and noticed a line which my brain somehow ignored at first. That line (in fact, that whole paragraph) does go far toward addressing my concerns. Consider the objections in this comment at least partially withdrawn!
0JonahS
Apology accepted :-). (Don't worry, I know that my post was long and that catching everything can require a lot of energy.)

In my understanding of things rationality does not involve values and altruism is all about values. They are orthogonal.

LW as a community (for various historical reasons) has a mix of rationalists and effective altruists. That's a characteristic of this particular community, not a feature of either rationalism or EA.

4Nornagest
Effective altruism isn't just being extra super altruistic, though. EA as currently practiced presupposes certain values, but its main insight isn't value-driven: it's that you can apply certain quantification techniques toward figuring out how to optimally implement your value system. For example, if an animal rights meta-charity used GiveWell-inspired methods to recommend groups advocating for veganism or protesting factory farming or rescuing kittens or something, I'd call that a form of EA despite the differences between its conception of utility and GiveWell's. Seen through that lens, effective altruism seems to have a lot more in common with LW-style rationality than, say, a preference for malaria eradication does by itself. We don't dictate values, and (social pressure aside) we probably can't talk people into EA if their value structure isn't compatible with it, but we might easily make readers with the right value assumptions more open to quantified or counterintuitive methods.
-2Lumifer
Yes, of course. So does effective proselytizing, for example. Or effective political propaganda. Take away the "presupposed values" and all you are left with is effectiveness.
-1Nornagest
Yes, LW exposure might easily dispose readers with those inclinations toward EA-like forms of political or religious advocacy, and if that's all they're doing then I wouldn't call them effective altruists (though only because politics and religion are not generally considered forms of altruism). That doesn't seem terribly relevant, though. Politics and religion are usually compatible with altruism, and nothing about effective altruism requires devotion solely to GiveWell-approved causes. I'm really not sure what you're trying to demonstrate here. Some people have values incompatible with EA's assumptions? That's true, but it only establishes the orthogonality of LW ideas with EA if everyone with compatible values was already an effective altruist, and that almost certainly isn't the case. As far as I can tell there's plenty of room for optimization. (It does establish an upper bound, but EA's market penetration, even after any possible LW influence, is nowhere near it.)
0Lumifer
That rationality and altruism are orthogonal. That effective altruism is predominantly altruism and "effective" plays second fiddle to it. That rationality does not imply altruism (in case you think it's a strawman, tom_cr seems to claim exactly that).
2Nornagest
If effective altruism was predominantly just altruism, we wouldn't be seeing the kind of criticism of it from a traditionally philanthropic perspective that we have been. I see this as strong evidence that it's something distinct, and therefore that it makes sense to talk about something like LW rationality methods bolstering it despite rationality's silence on pure questions of values. Yes, it's just [a method of quantifying] effectiveness. But effectiveness in this context, approached in this particular manner, is more significant -- and, perhaps more importantly, a lot less intuitive -- than I think you're giving it credit for.
0Lumifer
I don't know about that. First, EA is competition for a limited resource, the donors' money, and even worse, EA keeps on telling others that they are doing it wrong. Second, the idea that charity money should be spent in effective ways is pretty uncontroversial. I suspect (that's my prior, adjustable by evidence) that most of the criticism is aimed at specific recommendations of GiveWell and others, not at the concept of getting more bang for your buck. Take a look at Bill Gates. He is explicitly concerned with the effectiveness and impact of his charity spending -- to the degree that he decided to bypass most established nonprofits and set up his own operation. Is he a "traditional" or an "effective" altruist? I don't know. Yes, I grant you that. Traditional charity tends to rely on purely emotional appeals. But I don't know if that's enough to push EA into a separate category of its own.
2Viliam_Bur
Rationality itself does not involve values. But a human who learns rationality already has those values, and rationality can help them understand those values better, decompartmentalize, and optimize more efficiently.
6Lumifer
So? Let's say I value cleansing the Earth of untermenschen. Rationality can indeed help me achieve my goals and "optimize more efficiently". Once you start associating rationality with sets of values, I don't see how can you associate it with only "nice" values like altruism, but not "bad" ones like genocide.
0Armok_GoB
Maybe, but at least they'll be campaigning for mandatory genetic screening for genetic disorders rather than killing people of some arbitrary ethnicity they happened to fixate on.
-1Oscar_Cunningham
Because there's a large set of "nice" values that most of humanity shares.
2Lumifer
Along with a large set of "not so nice" values that most of humanity shares as well. A glance at history should suffice to demonstrate that.
3Oscar_Cunningham
I think one of the lessons from history is that we can still massacre each other even when everyone is acting in good faith.
-11tom_cr

A couple of points:

(1) You (and possibly others you refer to) seem to use the word 'consequentialism' to point to something more specific, e.g. classic utilitarianism, or some other variant. For example you say

[Yvain] argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value"

Actually, consequentialism follows independently of "others have non zero value." Hence, classic utilitarianism's axiomatic call to maximize the good for the greatest numb...

0JonahS
I didn't quite have classical utilitarianism in mind. I had in mind principles like:

  • Not helping somebody is equivalent to hurting the person.
  • An action that doesn't help or hurt someone doesn't have moral value.

I did mean after controlling for ability to have an impact.
0tom_cr
Strikes me as a bit like saying "once we forget about all the differences, everything is the same." Is there a valid purpose to this indifference principle? Don't get me wrong, I can see that quasi-general principles of equality are worth establishing and defending, but here we are usually talking about something like equality in the eyes of the state, ie equality of all people, in the collective eyes of all people, which has a (different) sound basis.
5nshepperd
If you actually did some kind of expected value calculation, with your utility function set to something like U(thing) = u(thing) / causal-distance(thing), you would end up double-counting "ability to have an impact", because there is already a 1/causal-distance sort of factor in E(U|action) = sum { U(thing') P(thing' | action) } built into how much each action affects the probabilities of the different outcomes (which is basically what "ability to have an impact" is). That's assuming that what JonahSinick meant by "ability to have an impact" was the impact of the agent upon the thing being valued. But it sounds like you might have been talking about the effect of thing upon the agent? As if all you can value about something is any observable effect that thing can have on yourself (which is not an uncontroversial opinion)?
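A minimal sketch of the double-counting point above, with invented numbers and hypothetical interventions (the names and figures are illustrative, not from the comment):

```python
# Invented numbers: a "near" intervention the agent can influence strongly, and a
# "far" one it can influence only weakly. "Ability to have an impact" already shows
# up in P(outcome | action); dividing utility by causal distance as well discounts
# the far intervention a second time.

def expected_utility(outcome_probs, U):
    """E(U | action) = sum over outcomes of U(outcome) * P(outcome | action)."""
    return sum(U(o) * p for o, p in outcome_probs)

near = [("near person helped", 0.9), ("nothing changes", 0.1)]
far  = [("far person helped", 0.1), ("nothing changes", 0.9)]

value    = {"near person helped": 1.0, "far person helped": 1.0, "nothing changes": 0.0}
distance = {"near person helped": 1.0, "far person helped": 9.0, "nothing changes": 1.0}

u_plain      = lambda o: value[o]                  # value people equally
u_discounted = lambda o: value[o] / distance[o]    # extra 1/causal-distance factor

print(expected_utility(near, u_plain), expected_utility(far, u_plain))            # 0.9 vs 0.1
print(expected_utility(near, u_discounted), expected_utility(far, u_discounted))  # 0.9 vs ~0.011
# The 9x down-weighting already present in the probabilities gets applied twice in the second row.
```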
-4tom_cr
Value is something that exists in a decision-making mind. Real value (as opposed to fictional value) can only derive from the causal influences of the thing being valued on the valuing agent. This is just a fact, I can't think of a way to make it clearer. Maybe ponder this: How could my quality of life be affected by something with no causal influence on me?
0JonahS
Note that I wasn't arguing that it's rational. See the quotation in this comment. Rather, I was describing an input into effective altruist thinking.
0Said Achmiz
Thank you for bringing this up. I've found myself having to point out this distinction (between consequentialism and utilitarianism) a number of times; it seems a commonplace confusion around here.
0tom_cr
I see Sniffnoy also raised the same point.

Cognitive biases were developed for survival and evolutionary fitness, and these things correlate more strongly with personal well-being than with the well-being of others.

I think this needs to be differentiated further or partly corrected:

  • Cognitive biases which improve individual fitness by needing fewer resources, i.e. heuristics which arrive at the same or an almost equally good result but with fewer resources. Reducing time and energy spent thus benefits the individual. Example:

  • Cognitive biases which improve individual fitness by avoiding dangerous parts

[...]
3JonahS
Thanks for the thoughts. These points all strike me as reasonable.
2Lumifer
Why not? Rationalizing against (unreasonable) fear seems fine to me. Rationalizing against anger looks useful. Etc., etc.
0Gunnar_Zarncke
Yes. I didn't think this through to all its consequences. It is a well-known psychological fact that humans have a quite diverse set of basic fears that appear, develop and are normally overcome (understood, limited, suppressed,...) during childhood. Dealing with your fears, coming to terms with them, is indeed a normal process. Quite a good read about this is Helping Children Overcome Fears. Indeed, having them initially is in most cases adaptive (I wonder whether it would be a global net positive if we could remove fear of spiders, weighing up the cost of lost time and energy due to spider fear versus the remaining dangerous cases). The key point is that a very unspecific fear like fear of darkness is moderated into a form where it doesn't control you and where it only applies to cases that you didn't adapt to earlier (many people still freak out if put into extremely unusual situations which add (multiply?) multiple such fears). And whether having them in these cases is positive I can at best speculate on. Nonetheless this argument that many fears are less adaptive than they used to be (because civilization weeded them out) is independent of the other emotions, esp. the 'positive' ones like love, empathy, happiness and curiosity, which, it appears, also put you into a biased state. Would you want to get rid of these too? Which?
-4Lumifer
Humans exist in a permanent "biased state". The unbiased state is the province of Mr. Spock and Mr. Data, Vulcans and androids. I think that rationality does not get rid of biases, but rather allows you to recognize them and compensate for them. Just like with e.g. fear -- you rarely lose a particular fear altogether, you just learn to control and manage it.
0Gunnar_Zarncke
You seem to mean that biases are the brain's way of perceiving the world in a way that focuses on the 'important' parts. Besides terminal goals, which just evaluate the perception with respect to utility, this acts as a filter but thereby also implies goals (namely the reduction of the importance of the filtered-out parts).
0Lumifer
Yes, but note that a lot of biases are universal to all humans. This means they are biological (as opposed to cultural) in nature. And this implies that the goals they developed to further are biological in nature as well. Which means that you are stuck with these goals whether your conscious mind likes it or not.
0Gunnar_Zarncke
Yes. That's what I meant when I said: "You wouldn't want to rationalize against your emotions. That will not work." If your conscious mind has goals incompatible with the effects of bioneuropsychological processes, then frustration seems the least of the likely results.
-2Lumifer
I still don't know about that. A collection of such "incompatible goals" has been described as civilization :-) For example, things like "kill or drive away those-not-like-us" look like biologically hardwired goals to me. Having a conscious mind have its own goals incompatible with that one is probably a good thing.
0Gunnar_Zarncke
Sure, we have to deal with some of these inconsistencies. And for some of us this is a continuous source of frustration. But we do not have to add more of them than absolutely necessary, do we?
2V_V
Risk aversion is not a bias.
3Lumifer
It might or might not be. If it is coming from your utility function, it's not. If it is "extra" to the utility function it can be a bias.
-2tom_cr
I understood risk aversion to be a tendency to prefer a relatively certain payoff to one that comes with a wider probability distribution but has a higher expectation. In which case, I would call it a bias.
0Said Achmiz
It's not a bias, it's a preference. Insofar as we reserve the term bias for irrational "preferences" or tendencies or behaviors, risk aversion does not qualify.
0tom_cr
I would call it a bias because it is irrational. It (as I described it - my understanding of the terminology might not be standard) involves choosing an option that is not the one most likely to lead to one's goals being fulfilled (this is the definition of 'payoff', right?). Or, as I understand it, risk aversion may amount to consistently identifying one alternative as better when there is no rational difference between them. This is also an irrational bias.
0Said Achmiz
Problems with your position:

1. "Goals being fulfilled" is a qualitative criterion, or perhaps a binary one. The payoffs at stake in scenarios where we talk about risk aversion are quantitative and continuous. Given two options, of which I prefer the one with lower risk but a lower expected value, my goals may be fulfilled to some degree in both cases. The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.

2. The alternatives at stake are probabilistic scenarios, i.e. each alternative is some probability distribution over some set of outcomes. The expectation of a distribution is not the only feature that differentiates distributions from each other; the form of the distribution may also be relevant. Taking risk aversion to be irrational means that you think the form of a probability distribution is irrelevant. This is not an obviously correct claim. In fact, in Rational Choice in an Uncertain World [1], Robyn Dawes argues that the form of a probability distribution over outcomes is not irrelevant, and that it's not inherently irrational to prefer some distributions over others with the same expectation. It stands to reason (although Dawes doesn't seem to come out and say this outright, he heavily implies it) that it may also be rational to prefer one distribution to another with a lower (Edit: of course I meant "higher", whoops) expectation.

[1] pp. 159-161 in the 1988 edition, if anyone's curious enough to look this up.

Extra bonus: This section of the book (chapter 8, "Subjective Expected Utility Theory", where Dawes explains VNM utility) doubles as an explanation of why my preferences do not adhere to the von Neumann-Morgenstern axioms.
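A minimal sketch of the kind of comparison at issue in point 2, with invented numbers: two distributions over utility itself, equal in expectation but opposite in skew.

```python
# Invented numbers. A VNM agent is indifferent between these (equal expected
# utility); Dawes' claim is that a real person may still prefer B, whose bad
# tail is bounded, over A, which carries a small chance of a very bad outcome.

def mean(dist):
    return sum(u * p for u, p in dist)

def skewness(dist):
    m = mean(dist)
    var = sum(p * (u - m) ** 2 for u, p in dist)
    return sum(p * (u - m) ** 3 for u, p in dist) / var ** 1.5

A = [(-8.0, 0.1), (2.0, 0.9)]   # negatively skewed: rare but severe downside
B = [(10.0, 0.1), (0.0, 0.9)]   # positively skewed: rare upside, bounded downside

print(mean(A), skewness(A))   # 1.0, about -2.67
print(mean(B), skewness(B))   # 1.0, about +2.67
```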
1tom_cr
Point 1: If option 1 leads only to a goal being 50% fulfilled, and option 2 leads only to the same goal being 51% fulfilled, then there is a sub-goal that option 2 satisfies (ie 51% fulfillment) but option 1 doesn't, but not vice versa. Thus option 2 is better under any reasonable attitude. The payoff is the goal, by definition. The greater the payoff, the more goals are fulfilled. But risk is integral to the calculation of utility. 'Risk avoidance' and 'value' are synonyms.

Point 2: Thanks for the reference. But, if we are really talking about a payoff as an increased amount of utility (and not some surrogate, e.g. money), then I find it hard to see how choosing an option that is less likely to provide the payoff can be better. If it is really safer (ie better, in expectation) to choose option 1, despite having a lower expected payoff than option 2, then is our distribution really over utility? Perhaps you could outline Dawes' argument? I'm open to the possibility that I'm missing something.
0Said Achmiz
Re: your response to point 1: again, the options in question are probability distributions over outcomes. The question is not one of your goals being 50% fulfilled or 51% fulfilled, but, e.g., a 51% probability of your goals being 100% fulfilled vs. a 95% probability of your goals being 50% fulfilled. (Numbers not significant; only intended for illustrative purposes.)

"Risk avoidance" and "value" are not synonyms. I don't know why you would say that. I suspect one or both of us is seriously misunderstanding the other.

Re: point #2: I don't have the time right now, but sometime over the next couple of days I should have some time and then I'll gladly outline Dawes' argument for you. (I'll post a sibling comment.)
0tom_cr
If I'm talking about a goal actually being 50% fulfilled, then it is.

Really? I consider risk to be the possibility of losing or not gaining (essentially the same) something of value. I don't know much about economics, but if somebody could help avoid that, would people be willing to pay for such a service? If I'm terrified of spiders, then that is something that must be reflected in my utility function, right? My payoff from being close to a spider is less than otherwise.

That would be very kind :) No need to hurry.
-1Said Achmiz
Dawes' argument, as promised. The context is: Dawes is explaining von Neumann and Morgenstern's axioms.

----------------------------------------

Aside: I don't know how familiar you are with the VNM utility theorem, but just in case, here's a brief primer. The VNM utility theorem presents a set of axioms, and then says that if an agent's preferences satisfy these axioms, then we can assign any outcome a number, called its utility, written as U(x); and it will then be the case that given any two alternatives X and Y, the agent will prefer X to Y if and only if E(U(X)) > E(U(Y)). (The notation E(x) is read as "the expected value of x".)

That is to say, the agent's preferences can be understood as assigning utility values to outcomes, and then preferring to have more (expected) utility rather than less (that is, preferring those alternatives which are expected to result in greater utility). In other words, if you are an agent whose preferences adhere to the VNM axioms, then maximizing your utility will always, without exception, result in satisfying your preferences. And in yet other words, if you are such an agent, then your preferences can be understood to boil down to wanting more utility; you assign various utility values to various outcomes, and your goal is to have as much utility as possible. (Of course this need not be anything like a conscious goal; the theorem only says that a VNM-satisfying agent's preferences are equivalent to, or able to be represented as, such a utility formulation, not that the agent consciously thinks of things in terms of utility.)

(Dawes presents the axioms in terms of alternatives or gambles; a formulation of the axioms directly in terms of the consequences is exactly equivalent, but not quite as elegant.)

N.B.: "Alternatives" in this usage are gambles, of the form ApB: you receive outcome A with probability p, and otherwise (i.e. with probability 1–p) you receive outcome B. (For example, your choice might be between two alternat
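A minimal sketch of the comparison the primer above describes, with made-up utilities: an ApB gamble versus a sure thing, compared purely by expected utility.

```python
# Made-up utilities. "ApB" means: outcome A with probability p, otherwise B.
# A VNM agent prefers whichever alternative has the higher expected utility,
# regardless of how "safe" or "risky" the alternatives look.

def expected_utility(alternative, U):
    """alternative: list of (outcome, probability) pairs summing to 1."""
    return sum(U[outcome] * p for outcome, p in alternative)

U = {"A": 10.0, "B": 0.0, "C": 4.0}

gamble   = [("A", 0.5), ("B", 0.5)]   # "A 0.5 B": A with p = 0.5, else B
sure_bet = [("C", 1.0)]               # C for certain

print(expected_utility(gamble, U), expected_utility(sure_bet, U))  # 5.0 vs 4.0
# With these utilities the agent takes the gamble: E(U) is all that matters.
```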
1nshepperd
This sounds like regular risk aversion, which is normally easy to model by transforming utility by some concave function. How do you show that there's an actual violation of the independence axiom from this example? Note that the axioms require that there exist a utility function u :: outcome -> real such that you maximise expected utility, not that some particular function (such as the two graphs you've drawn) actually represents your utility. In other words, you haven't really shown that "to prefer action A, with its attendant distribution of outcomes, to action B with its distribution, is to violate the axiom" since the two distributions don't have the form ApC, BpC with A≅B. Simply postulating that the expected utilities are the same only shows that that particular utility function is not correct, not that no valid utility function exists.
0Said Achmiz
Assuming you privilege some reference point as your x-axis origin, sure. But there's no good reason to do that. It turns out that people are risk averse no matter what origin point you select (this is one of the major findings of prospect theory), and thus the concave function you apply will be different (will be in different places along the x-axis) depending on which reference point you present to a person. This is clearly irrational; this kind of "regular risk aversion" is what Dawes refers to when he talks about independence axiom violation due to framing effects, or "pseudocertainty".

The graphs are not graphs of utility functions. See the first paragraph of my post here.

Indeed they do; because, as one of the other axioms states, each outcome in an alternative may itself be an alternative; i.e., alternatives may be constructed as probability mixtures of other alternatives, which may themselves be... etc. If it's the apparent continuity of the graphs that bothers you, then you have but to zoom in on the image, and you may pretend that the pixelation you see represents discreteness of the distribution. The point stands unchanged.

The point Dawes is making, I think, is that any utility function (or at least, any utility function where the calculated utilities more or less track an intuitive notion of "personal value") will lead to this sort of preference between two such distributions. In fact, it is difficult to imagine any utility function that both corresponds to a person's preferences and doesn't lead to preferences for less negatively skewed probability distributions over outcomes, over more negatively skewed ones.
0Jiro
Couldn't this still be rational in general if the fact that a particular reference point is presented provides information under normal circumstances (though perhaps not rational in a laboratory setting)?
1Said Achmiz
I think you'll have to give an example of such a scenario before I could comment on whether it's plausible.
0nshepperd
What? This has nothing to do with "privileged reference points". If I am [VNM-]rational, with utility function U, and you consider an alternative function $ = exp(U) (or an affine transformation thereof), I will appear to be risk averse with respect to $. This doesn't mean I am irrational, it means you don't have the correct utility function. And in this case, you can turn the wrong utility function into the right one by taking log($). That is what I mean by "regular risk aversion".

I know, they are graphs of P(U). Which is implicitly a graph of the composition of a probability function over outcomes with (the inverse of) a utility function.

Okay, which parts, specifically, are A, B and C, and how is it established that the agent is indifferent between A and B?

And I say that is assuming the conclusion. And, if only established for some set of utility functions that "more or less track an intuitive notion of "personal value"", fails to imply the conclusion that the independence axiom is violated for a rational human.
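A small illustration of this "regular risk aversion" point, with hypothetical dollar amounts: an agent that maximizes expected U = log($) looks risk averse with respect to dollars without violating any VNM axiom.

```python
# Hypothetical amounts. The agent maximizes E[log($)], so it turns down a gamble
# with a higher expected dollar value in favor of a smaller sure payment --
# "risk aversion" with respect to $, but plain expected-utility maximization
# with respect to U = log($).
import math

def expect(outcomes, f=lambda x: x):
    return sum(f(x) * p for x, p in outcomes)

sure   = [(100.0, 1.0)]               # $100 for certain
gamble = [(40.0, 0.5), (200.0, 0.5)]  # expected value $120

print(expect(sure), expect(gamble))                      # 100.0 vs 120.0 in dollars
print(expect(sure, math.log), expect(gamble, math.log))  # ~4.61 vs ~4.49 in utility
```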
0Said Achmiz
It actually doesn't matter what the values are, because we know from prospect theory that people's preferences about risks can be reversed merely by framing gains as losses, or vice versa. No matter what shape the function has, it has to have some shape — it can't have one shape if you frame alternatives as gains but a different, opposite shape if you frame them as losses.

True enough. I rounded your objection to the nearest misunderstanding, I think. Are you able to conceive of a utility function, or even a preference ordering, that does not give rise to this sort of preference over distributions? Even in rough terms? If so, I would like to hear it!

The core of Dawes' argument is not a mathematical one, to be sure (and it would be difficult to make it into a mathematical argument, without some sort of rigorous account of what sorts of outcome distribution shapes humans prefer, which in turn would presumably require substantial field data, at the very least). It's an argument from intuition: Dawes is saying, "Look, I prefer this sort of distribution of outcomes. [Implied: 'And so do other people.'] However, such a preference is irrational, according to the VNM axioms..."

Your objection seems to be: "No, in fact, you have no such preference. You only think you do, because you are envisioning your utility function incorrectly." Is that a fair characterization?

Your talk of the utility function possibly being wrong makes me vaguely suspect a misunderstanding. It's likely I'm just misunderstanding you, however, so if you already know this, I apologize, but just in case: If you have some set of preferences, then (assuming your preferences satisfy the axioms), we can construct a utility function (up to positive affine transformation). But having constructed this function — which is the only function you could possibly construct from that set of preferences (up to positive affine transformation) — you are not then free to say "oh, well, maybe this is the wrong utilit
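A short sketch of the framing-reversal point, using a prospect-theory-style value function with commonly cited parameter estimates (curvature 0.88, loss aversion 2.25); the dollar amounts are invented, and probability weighting is omitted. The same final-wealth gamble is rejected when framed as gains and accepted when framed as losses.

```python
# Invented dollar amounts; value function v(x) = x**a for gains, -lam*(-x)**a for
# losses. Both frames describe the same final wealth: $50 for sure vs. a 50/50
# chance of $0 or $100. Only the reference point differs.

A_CURVE, LAM = 0.88, 2.25

def v(x):
    return x ** A_CURVE if x >= 0 else -LAM * (-x) ** A_CURVE

def prospect_value(outcomes):
    return sum(v(x) * p for x, p in outcomes)

# Gain frame: reference point is current wealth ($0 extra).
sure_gain   = [(50, 1.0)]
gamble_gain = [(100, 0.5), (0, 0.5)]

# Loss frame: reference point is current wealth plus $100 already "in hand".
sure_loss   = [(-50, 1.0)]
gamble_loss = [(-100, 0.5), (0, 0.5)]

print(prospect_value(sure_gain), prospect_value(gamble_gain))  # ~31.3 vs ~28.8 -> take the sure thing
print(prospect_value(sure_loss), prospect_value(gamble_loss))  # ~-70.4 vs ~-64.7 -> take the gamble
# Same final-wealth distributions, opposite choices: a reversal driven purely by framing.
```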
0nshepperd
Yes, framing effects are irrational, I agree. I'm saying that the mere existence of risk aversion with respect to something does not demonstrate the presence of framing effects or any other kind of irrationality (departure from the VNM axioms).

That would be one way of describing my objection. The argument Dawes is making is simply not valid. He says "Suppose my utility function is X. Then my intuition says that I prefer certain distributions over X that have the same expected value. Therefore my utility function is not X, and in fact I have no utility function." There are two complementary ways this argument may break:

If you take as a premise that the function X is actually your utility function (ie. "assuming I have a utility function, let X be that function") then you have no license to apply your intuition to derive preferences over various distributions over the values of X. Your intuition has no facilities for judging meaningless numbers that have only abstract mathematical reasoning tying them to your actual preferences. If you try to shoehorn the abstract constructed utility function X into your intuition by imagining that X represents "money" or "lives saved" or "amount of something nice" you are making a logical error.

On the other hand, if you start by applying your intuition to something it understands (such as "money" or "amount of nice things") you can certainly say "I am risk averse with respect to X", but you have not shown that X is your utility function, so there's no license to conclude "I (it is rational for me to) violate the VNM axioms".

No, but that doesn't mean such a thing does not exist!
0Said Achmiz
Well, now, hold on. Dawes is not actually saying that (and neither am I)! The claim is not "risk aversion demonstrates that there's a framing effect going on (which is clearly irrational, and not just in the 'violates VNM axioms' sense)". The point is that risk aversion (at least, risk aversion construed as "preferring less negatively skewed distributions") constitutes departure from the VNM axioms. The independence axiom strictly precludes such risk aversion. Whether risk aversion is actually irrational upon consideration — rather than merely irrational by technical definition, i.e. irrational by virtue of VNM axiom violation — is what Dawes is questioning.

That is not a good way to characterize Dawes' argument. I don't know if you've read Rational Choice in an Uncertain World. Earlier in the same chapter, Dawes, introducing von Neumann and Morgenstern's work, comments that utilities are intended to represent personal values. This makes sense, as utilities by definition have to track personal values, at least insofar as something with more utility is going to be preferred (by a VNM-satisfying agent) to something with less utility. Given that our notion of personal value is so vague, there's little else we can expect from a measure that purports to represent personal value (it's not like we've got some intuitive notion of what mathematical operations are appropriate to perform on estimates of personal value, which utilities then might or might not satisfy...). So any VNM utility values, it would seem, will necessarily match up to our intuitive notions of personal value.

So the only real assumption behind those graphs is that this agent's utility function tracks, in some vague sense, an intuitive notion of personal value — meaning what? Nothing more than that this person places greater value on things he prefers, than on things he doesn't prefer (relatively speaking). And that (by definition!) will be true of the utility function derived from his preferences. It
0nshepperd
No, it doesn't. Not unless it's literally risk aversion with respect to utility. That seems to me a completely unfounded assumption. The fact that the x-axis is not labeled is exactly why it's unreasonable to think that just asking your intuition which graph "looks better" is a good way of determining whether you have an actual preference between the graphs. The shape of the graph is meaningless.
0tom_cr
Thanks very much for taking the time to explain this. It seems like the argument (very crudely) is that, "if I lose this game, that's it, I won't get a chance to play again, which makes this game a bad option." If so, again, I wonder if our measure of utility has been properly calibrated. It seems to me like the expected utility of option B, where I might get kicked out of the game, is lower than the expected utility of option A, where this is impossible. Your example of insurance may not be a good one, as one insures against financial loss, but money is not identical to utility. Nonetheless, those exponential distributions make a very interesting argument. I'm not entirely sure, I need to mull it over a bit more. Thanks again, I appreciate it.
0Said Achmiz
Just a brief comment: the argument is not predicated on being "kicked out" of the game. We're not assuming that even the lowest-utility outcomes cause you to no longer be able to continue "playing". We're merely saying that they are significantly worse than average.
0tom_cr
Sure, I used that as what I take to be the case where the argument would be most easily recognized as valid. One generalization might be something like, "losing makes it harder to continue playing competitively." But if it becomes harder to play, then I have lost something useful, i.e. my stock of utility has gone down, perhaps by an amount not reflected in the inferred utility functions. My feeling is that this must be the case, by definition (if the assumed functions have the same expectation), but I'll continue to ponder. The problem feels related to Pascal's wager - how to deal with the low-probability disaster.
2Said Achmiz
I really do want to emphasize that if you assume that "losing" (i.e. encountering an outcome with a utility value on the low end of the scale) has some additional effects, whether that be "losing takes you out of the game", or "losing makes it harder to keep playing", or whatever, then you are modifying the scenario, in a critical way. You are, in effect, stipulating that that outcome actually has a lower utility than it's stated to have. I want to urge you to take those graphs literally, with the x-axis being Utility, not money, or "utility but without taking into account secondary effects", or anything like that. Whatever the actual utility of an outcome is, after everything is accounted for — that's what determines that outcome's position on the graph's x-axis.

(Edit: And it's crucial that the expectation of the two distributions is the same. If you find yourself concluding that the expectations are actually different, then you are misinterpreting the graphs, and should re-examine your assumptions; or else suitably modify the graphs to match your assumptions, such that the expectations are the same, and then re-evaluate.)

This is not a Pascal's Wager argument. The low-utility outcomes aren't assumed to be "infinitely" bad, or somehow massively, disproportionately, unrealistically bad; they're just... bad. (I don't want to get into the realm of offering up examples of bad things, because people's lives are different and personal value scales are not absolute, but I hope that I've been able to clarify things at least a bit.)
-2tom_cr
Thanks, that focuses the argument for me a bit. So if we assume those curves represent actual utility functions, he seems to be saying that the shape of curve B, relative to A makes A better (because A is bounded in how bad it could be, but unbounded in how good it could be). But since the curves are supposed to quantify betterness, I am attracted to the conclusion that curve B hasn't been correctly drawn. If B is worse than A, how can their average payoffs be the same? To put it the other way around, maybe the curves are correct, but in that case, where does the conclusion that B is worse come from? Is there an algebraic formula to choose between two such cases? What if A had a slightly larger decay constant, at what point would A cease to be better? I'm not saying I'm sure Dawes' argument is wrong, I just have no intuition at the moment for how it could be right.
2Said Achmiz
A point of terminology: "utility function" usually refers to a function that maps things (in our case, outcomes) to utilities. (Some dimension, or else some set, of things on the x-axis; utility on the y-axis.) Here, we instead are mapping utility to frequency, or more precisely, outcomes (arranged — ranked and grouped — along the x-axis by their utility) to the frequency (or, equivalently, probability) of the outcomes' occurrence. (Utility on the x-axis, frequency on the y-axis.) The term for this sort of graph is "distribution" (or more fully, "frequency [or probability] distribution over utility of outcomes"). To the rest of your comment, I'm afraid I will have to postpone my full reply; but off the top of my head, I suspect the conceptual mismatch here stems from saying that the curves are meant to "quantify betterness". It seems to me (again, from only brief consideration) that this is a confused notion. I think your best bet would be to try taking the curves as literally as possible, attempting no reformulation on any basis of what you think they are "supposed" to say, and proceed from there. I will reply more fully when I have time.

I think it is rationally optimal for me to not give any money away, since I need all of it to pursue rationally-considered high-level goals. (Much like Eliezer probably doesn't give away money that could be used to design and build FAI -- because given the very small number of people now working on the problem, and given the small number of people capable of working on the problem, that would be irrational of him). There's nothing wrong with believing in what you're doing, and believing that such a thing is optimal. ...Perhaps it is optimal. If it's not, th...

Another effect: people on LW are massively more likely to describe themselves as effective altruists. My moral ideals were largely formed before I came into contact with LW, but not until I started reading was I introduced to the term "effective altruism".

The question appears to assume that LW participation is identically equal to improved rationality. Involvement in LW and involvement in EA are pretty obviously going to be correlated given they're closely related subcultures.

If this is not the case: Do you have a measure to hand of "improved rationality" that doesn't involve links to LW?

[-][anonymous]-10

The principle of indifference. — The idea that from an altruistic point of view, we should care about people who are unrelated to us as much as we do about people who are related to us. For example, in The Life You Can Save: How to Do Your Part to End World Poverty, Peter Singer makes the case that we should show a similar degree of moral concern for people in the developing world who are suffering from poverty as we do to people in our neighborhoods. I'd venture the guess its popularity among rationalists is an artifact of culture or a selection effect r

[...]
1Cyan
I'm a loyal tovarisch of Soviet Canuckistan, and I have to say that doesn't seem like a conundrum to me: there's no direct contradiction between basing one's charitable giving on evidence about charitable organizations' effectiveness and thinking that markets in which individuals are free to act will lead to more preferable outcomes than markets with state-run monopolies/monopsonies.
2[anonymous]
The whole point of the second quotation and the paragraph after that was to... Oh never mind, should I just assume henceforth that contrary to its usage in socialist discourse, to outsiders "socialism" always means state-owned monopoly? In that case, what sort of terminology should I use for actual worker control of the means of production, and such things?
1Cyan
"Anarcho-syndicalism" maybe? All's I know is that my socialized health insurance is a state-run oligopsony/monopoly (and so is my province's liquor control board). In any event, if direct redistribution of wealth is the key identifier of socialism, then Milton Friedman was a socialist, given his support for negative income taxes. Prolly the best thing would be to avoid jargon as much as possible when talking to outsiders and just state what concrete policy you're talking about. For what it's worth, it seems to me that you've used the term "socialism" to refer to two different, conflated, specific policies. In the OP you seem to be talking about direct redistribution of money, which isn't necessarily equivalent to the notion of worker control of the means of production that you introduce in the parent; and the term "socialism" doesn't pick out either specific policy in my mind. (An example of how redistribution and worker ownership are not equivalent: on Paul Krugman's account, if you did direct redistribution right now, you'd increase aggregate demand but not even out ownership of capital. This is because current household consumption seems to be budget-constrained in the face of the ongoing "secular stagnation" -- if you gave poor people a whack of cash or assets right now, they'd (liquidate and) spend it on things they need rather than investing/holding it. )
5[anonymous]
Ah, here's the confusion. No, in the OP I was talking about worker control of the means of production, and criticizing Effective Altruism for attempting to fix poverty and sickness through what I consider an insufficiently effective intervention, that being direct redistribution of money.
1Cyan
Oh, I see. Excellent clarification. How would you respond to (what I claim to be) Krugman's account, i.e., in current conditions poor households are budget-constrained and would, if free to do so, liquidate their ownership of the means of production for money to buy the things they need immediately? Just how much redistribution of ownership are you imagining here?
2[anonymous]
Basically, I accept that critique, but only at an engineering level. Ditto on the "how much" issue: it's engineering. Neither of these issues actually makes me believe that a welfare state strapped awkwardly on top of a fundamentally industrial-capitalist, resource-capitalist, or financial-capitalist system - and constantly under attack by anyone perceiving themselves as a put-upon, well-heeled taxpayer to boot - is actually a better solution to poverty and inequality than a more thoroughly socialist system in which such inequalities and such poverty just don't happen in the first place (because they're not part of the system's utility function).

I certainly believe that we have not yet designed or located a perfect socialist system to implement. What I do note, as an addendum to that, is that nobody who supports capitalism believes the status quo is a perfect capitalism, and most people who aren't fanatical ideologues don't even believe we've found a perfect capitalism yet. The lack of a preexisting design X and a proof that X Is Perfect does not preclude the existence of a better system, whether redesigned from scratch or found by hill-climbing on piecemeal reforms. All that lack means is that we have to actually think and actually try -- which we should have been doing anyway, if we wish to live up to our professed rationality.
1Cyan
Good answer. (Before this comment thread I was, and I continue to be, fairly sympathetic to these efforts.)
2[anonymous]
Thanks!
0Lumifer
An interesting question :-D How do you define "socialism" in this context?
3[anonymous]
I would define it here as any attempt to attack economic inequality at its source by putting the direct ownership of capital goods and resultant products in the hands of workers rather than with a separate ownership class. This would thus include: cooperatives of all kinds, state socialism (at least when a passable claim to democracy can be made), syndicalism, and also Georgism and "ecological economics" (which tend to nationalize/publicize the natural commons and their inherent rentier interest rather than factories, but the principle is similar).

A neat little slogan would be to say: you can't fix inequality through charitable redistribution, whether by state sponsorship or individual effort, but you could fix it through "predistribution" of property ownership to ensure nobody is proletarianized in the first place. (For the record: "proletarianized" means that someone lacks any means of subsistence other than wage-labor. There are very many well-paid proletarians among the Western salariat, the sort of people who read this site, but even this well-off kind of wage labor becomes extremely problematic when the broader economy shifts -- just look at how well lawyers or doctors are faring these days in many countries!)

I do agree that differences of talent, skill, education, and luck in any kind of optimizing economy (so not only markets but anything else that rewards competence and punishes incompetence) will eventually lead to some inequalities on grounds of competence and many inequalities due to network effects, but I don't think this is a good ethical excuse to abandon the mission of addressing the problem at its source. It just means we have to stop pretending to be wise by pointing out the obvious problems and actually think about how to accomplish the goal effectively.
2Eugine_Nier
The distinction you're making with the word "proletarianized" doesn't really make sense when the market value of the relevant skills is larger than the cost of the means of production. Owning the means of production doesn't help here, since broader economic shifts can make your factory obsolete just as easily as they can make your skills obsolete.
1Eugine_Nier
That doesn't work in the long term. What happens to a socialized factory when demand for the good it produces decreases? A capitalist factory would lay off some workers, but a socialized factory can't do that, so it winds up making uncompetitive products. The result is that the socialized factory will eventually get out-competed by capitalist factories. If you force all factories to be socialized, this will eventually lead to economic stagnation. (By the way, this is not the only problem with socialized factories.)
2[anonymous]
Why not? Who says? Are we just automatically buying into everything the old American propagandists say now ;-)? I've not even specified a model and you're already making unwarranted assumptions. It sounds to me like you've got a mental stop-sign.
0Eugine_Nier
But you did refer to real-world examples.
-1Lumifer
That's a reasonable definition. By the way, do you think that EA should tackle the issue of economic inequality, or do you think EA already asserts that goal itself?
3[anonymous]
I think EA very definitely targets both poverty and low quality of life. I think the factual evidence shows that inequality appears to have a detectable effect on both poverty (defined even in an absolute sense: less egalitarian populations develop completely impoverished sub-populations more easily) and well-being in general (the well-being effects, surprisingly, show up all across the class spectrum), and that therefore someone who cares about optimizing away absolute poverty and optimizing for well-being should care about optimizing for the level of inequality which generates the least poverty and the most well-being.

Obviously the factual portions of this belief are subject to update, on top of my own innate preference for more egalitarian interactions (which is strong enough that it has never seemed to change). My preferences could tilt me towards seeing/acknowledging one set of evidence rather than another, towards believing that "the good is the true", but I was actually as surprised as anyone when the sociological findings showed that rich people are worse off in less-egalitarian societies.

EDIT: Here is a properly rigorous review, and here is a critique.
0Lumifer
To make an obvious observation, targeting poverty and targeting economic inequality are very different things. It is clear that EA targets "low quality of life", but my question was whether EA people explicitly target economic inequality -- or whether they don't and you think they should. Note that asserting that inequality affects absolute poverty does NOT imply that getting rid of inequality is the best method of dealing with poverty.
3[anonymous]
As far as I'm aware, EA people do not currently explicitly target economic inequality. I am attempting to claim that they have instrumental reason to shift towards doing so. It certainly doesn't, but my explanation of capitalism also attempted to show that in this particular system, inequality and absolute poverty share a common cause. Proletarianized Nicaraguan former-peasants are absolutely poor, but their poverty was created by the same system that is churning out inequality -- or so I believe on the weight of my evidence. If there's a joint of reality I've completely failed to cleave, please let me know.
1Lumifer
Poverty is not created. Poverty is the default state that you do (or do not) get out of. Historical evidence shows that capitalist societies are pretty good at getting large chunks of their population out of poverty. The same evidence shows that alternatives to capitalism are NOT good at that. The absolute poverty of Russian peasants was not created by capitalism. Neither was the absolute poverty of the Kalahari Bushmen or the Yanomami. But look at what happened to China once capitalism was allowed in.
2TheAncientGeek
There's evidence that capitalism plus social democracy works to increase well-being. You can't infer from that that capitalism is doing all the heavy lifting.
0Lumifer
There is evidence that capitalism without social democracy works to increase well-being. Example: contemporary China.
1[anonymous]
You're speaking in completely separate narratives to counter a highly specific scenario I had raised -- and one which has actually happened!

Here is the fairly consistent history of actually existing capitalism, in most societies: an Enclosure Movement of some sort modernizes old property structures; this proletarianizes some of the peasants or other subsistence farmers (that is, it removes them from their traditionally-inhabited land); this creates a cheap labor force which is then used in new industries but whose members have a lower standard of living than their immediate peasant forebears; this has in fact created both absolute poverty and inequality (compared to the previous non-capitalist system). Over time, the increased productivity raises the mean standard of living; whether or not the former peasants get access to that new, higher standard of living seems to be a function of how egalitarian society's arrangements are at that time. History appears to show that capitalism usually starts out quite radically inegalitarian but is eventually forced to make egalitarian concessions (these being known as "social democracy" or "welfare programs" in political-speak) that, after decades of struggle, finally raise the formerly-peasant, now-proletarian population above the peasant standard of living.

Note how there is in fact causation here: it all starts when a formerly non-industrial society chooses to modernize and industrialize by shifting from a previous (usually feudal and mostly agricultural) mode of production that involved a complicated system of inter-class concessions to mutual survival, to a new and simplified system of mass production, freehold property titles, and no inter-class concessions whatsoever. It's not natural (or rather, predestined: the past doesn't reach around the present to write the future); it's a function of choices people made, which are themselves functions of previous causes, and so on backwards in time. All we "radicals" are actually pointing out is that
4Lumifer
I am sorry, are you claiming that there was neither inequality nor absolute poverty in pre-capitalist societies?? I don't understand what you are trying to say about causality. Huh? You can make different choices in the present to affect the future. That's very far from "at any point in history". So, any idea why all attempts to do so have ended pretty badly so far?
2[anonymous]
I'm claiming, on shakier evidence than previously but still to the best of my own knowledge, that late-feudal societies were somewhat more egalitarian than early-capitalist ones. The peasants were better off than the proletarians: they were poor, but they had homes, wouldn't starve on anyone's arbitrary fiat, and lower population density made disease less rampant.

The point being: if we consider this history of capitalism as a "story", then different countries are in different places in the story (including some parts I didn't list in the story because they're just plain not universal). If you know what sort of process is happening to you, you can choose differently than if you were ignorant (this is a truism, but it bears repeating when so many people think economic history is some kind of destiny unaffected by any choice beyond market transactions).

They haven't. You're raising the most famous failed leftist experiments to salience and falsely generalizing. In fact, in the case of Soviet Russia and Red China, you're basically just generalizing from two large, salient examples. Then there's the question of whether "socialism fails" actually cleaves reality at the joint: was it socialism failing in Russia and China, or totalitarianism, or state-managerialism (remember, I've already dissolved to the level where these are three different things that can combine but don't have to)? Remember, until the post-WW2 prosperity of the social-democratic era, the West was quite worried about how quickly and effectively the Soviets were able to grow their economy, especially their military economy.

In other posts I've listed off quite a lot of different options and engineering considerations for pro-egalitarian and anti-poverty economic optimizations. The fundamental point I'm trying to hammer home, though, is that these are engineering considerations. You do not have to pick some system, like "American capitalism" or "Soviet Communism" or "European social democracy", and tre
2Lumifer
I don't think any of that is true -- they neither "had homes" (using the criteria under which the proletariat didn't), nor "wouldn't starve", and disease wasn't "less rampant" either. You seem to be engaging in romanticizing some imagined pastoral past. Not to mention that you're talking about human universals, so I don't see any reason to restrict ourselves geographically to Europe or time-wise to the particular moment when pre-capitalist societies were changing over to capitalist. Will you make the same claims with respect to Asian or African societies? And how about comparing peak-feudal to peak-capitalist societies?

Well, of course, but I fail to see the implications.

At the whole-society level, they have. Besides Mondragon you might have mentioned kibbutzim, which are still around. However, neither kibbutzim nor Mondragon are rapidly growing and taking over the world. Kibbutzim are in decline and Mondragon is basically just another corporation, surviving but not anomalously successful. Can you run coops and anarcho-syndicalist communes in contemporary Western societies? Of course you can! The same way you can run religious cults and new age retreats and whatnot. Some people like them and will join them. Some. Very few.

I strongly disagree. I think there are basic-values considerations as well as "this doesn't do what you think it does" considerations.
5[anonymous]
I see no reason why an optimal system for achieving the human good in the realm of economics must necessarily conquer and destroy its competitors or, as you put it, "take over the world". In fact, the popular distaste for imperialism rather strongly tells me quite the opposite! "Capitalism paper-clips more than the alternatives" is not a claim in favor of capitalism.

Ok? Can we nail this down to a dispute over specific facts and have one of us update on evidence, or do you want to keep this in the realm of narrative? Have you considered that in most states within the USA, you cannot actually charter a cooperative? There's simply no statute for it. In those states and countries where cooperatives can be chartered, they're a successful if not rapidly spreading form of business, and many popular brands are actually, when you check their incorporation papers, cooperatives. More so in Europe. Actually, there's been some small bit of evidence that cooperatives thrive more than "ordinary" companies in a financial crisis (no studies have been done about "all the time", so to speak), because their structure keeps them more detached and less fragile with respect to the financial markets.

Then I think you should state the basic values you're serving, and argue (very hard, since this is quite the leap you've taken!) that "taking over the world" is a desirable instrumental property for an economic system, orthogonally to its ability to cater to our full set of actual desires and values. Be warned that, to me, it looks as if you're arguing in favor of Clippy-ness being a virtue.
3TheAncientGeek
There's an argument that you should have the greatest variety of structures and institutions possible, to give yourself Talebian robustness: to prevent everything failing at the same time for the same reasons.
0[anonymous]
Yes, there is. This certainly argues in favor of trying out additional institutional forms.
1Lumifer
Because, in the long term, there can be only one. If your "optimal system" does not produce as much growth/value/output as the competition, the competition will grow relatively stronger (exponentially, too) every day. Eventually your "optimal system" will be taken over, by force or by money, or just driven into irrelevance. Look at, say, religious communities like the Amish. They can continue to exist as isolated pockets as long as they are harmless. But they have no future.

This seems to be a 10,000-feet-level discussion, so I think we can just note our disagreement without getting bogged down in historical minutiae.

Any particular reason why you can't set it up as a partnership or a corporation with a specific corporate charter? Besides, I don't think your claim is true. Credit unions are very widespread and they're basically coops. There are mutual savings banks and mutual insurance companies which are, again, basically coops.

That line of argument doesn't have anything to do with taking over the world. It is focused on trade-offs between values (specifically, that in chasing economic equality you're making bad trade-offs; of course, that depends on what your values are) and on a claim that you're mistaken about the consequences of establishing particular socioeconomic systems.
2TheAncientGeek
Note that systems aren't competing to produce $$$, they are competing to produce QoL. Europeans are happy to live in countries an inch away from bankruptcy because they get free healthcare and a rich cultural heritage and low crime.... Note also that societies use each other's products and services, and the natural global ecosystem might have niches for herbivores as well as carnivores.
1Lumifer
That depends on your analysis framework. If you're thinking about voluntary migrations, quality of life matters a lot. But if you're thinking of scenarios like "We'll just buy everything of worth in this country", for example, $$$ matter much more. And, of course, if push comes to shove and the military gets involved... That's a good point. But every player in an ecosystem must produce some value in order not to die out.
2TheAncientGeek
Attempts at buyouts and world domination tend to produce concerted opposition.
1Lumifer
For a historical example consider what happened to the Americas when the Europeans arrived en masse.
2bramflakes
It's not that accurate to describe Europeans as "conquering" the Americas; it was more like moving in after the smallpox did most of the dirty work and then mopping up the remainder. A better example is Africa, where it was unquestionably deliberate acts of aggression that saw nearly the whole continent subdued.
6TheAncientGeek
Either way, it's not relevant to the "best" political system taking over, because it's about opportunity, force of numbers, technology, and GERMS.
-1bramflakes
And genes.
6TheAncientGeek
If one genotype took over, that would be fragile. Like pandas. Diversity is robustness.
6bramflakes
I dunno man, milk digestion worked out well for Indo-Europeans.
2[anonymous]
If by genes you mean smallpox and hepatitis resistance genes, yes.
0Lumifer
That too, but I think the Americas are a better example, because nowadays the mainstream media is full of bison excrement about how Native Americans led wise, serene, and peaceful lives in harmony with Nature until the stupid and greedy white man came and killed them all.
2bramflakes
Maybe it's a provincial thing. Europeans get the same or similar thing about our great-grandfathers' treatment of Africans. Here in Britain we get both :/
2TheAncientGeek
For a counterexample, see WWII. Sure, overwhelming technological superiority is overwhelming. But that's unlikely to happen again in a globalised world.
7Nornagest
WWII was, to oversimplify, provoked by a coalition of states attempting regional domination; but their means of doing so were pretty far from the "outcompete everyone else" narrative upthread, and in fact you could view them as being successful in proportion to how closely they hewed to it. I know the Pacific theater best; there, we find Japan's old-school imperialistic moves meeting little concerted opposition until they attacked Hawaii, Hong Kong, and Singapore, thus bringing the US and Britain's directly administered eastern colonies into the war. Pearl Harbor usually gets touted as the start of the war on that front, but in fact Japan had been taking over swaths of Manchuria, Mongolia, and China (in roughly that order) since 1931, and not at all quietly. You've heard of the Rape of Nanking? That happened in 1937, before the Anschluss was more than a twinkle in Hitler's eye. If the Empire of Japan had been content to keep picking on less technologically and militarily capable nations, I doubt the Pacific War as such would ever have come to a head.
0TheAncientGeek
In the modern world, attempts at takeover produce concerted opposition, because the modern world has the technological and practical mechanisms to concert opposition. There are plenty of examples of takeovers in the ancient world because no one could send the message, "we've been taken over and you could be next".
3Nornagest
Funny. I've just finished reading Herodotus's Histories, the second half of which could be described as chronicling exactly that message and the response to it. (There's a bit more to it, of course. In summary, the Greek-speaking Ionic states of western Turkey rebelled against their Persian-appointed satraps, supported by Athens and its allies; after putting down the revolt, Persia's emperor Darius elected to subjugate Athens and incidentally the rest of Aegean Greece in retaliation. Persia in a series of campaigns then conquered much of Greece before being stopped at Marathon; years later, Darius's son Xerxes decided to go for Round 2 and met with much the same results.)
0TheAncientGeek
And Genghis and Attila...
1Lumifer
Remind me, who owns that peninsula in the Black Sea now...? I think you severely underestimate the communication capabilities of the ancient world. You also overestimate the willingness of people to die for somebody else far away.
-1TheAncientGeek
Remind me, who's not in the G8 anymore? Brits fought in Borneo during WWII. You may be succumbing to the Typical Country Fallacy.
-3Eugine_Nier
Remind me why anyone should care about the G8?
1Lumifer
I don't see why not. Besides, in this context we're not talking about world domination, we're talking about assimilating backward societies and spreading to them the light of technological progress :-D
-1TheAncientGeek
Do they see it that way? Did anyone ask them? And the "why not" is that there are ways of telling what other people are up to: it's called espionage.
1TheAncientGeek
There's an object-level argument against (some kinds of) socialism, namely that they didn't work, and there's a meta-level argument against social engineering in general: that societies are too complex and organic for large-scale artificial changes to have predictable effects.
-2[anonymous]
That's called an Argument From Ignorance. All societies consist mostly, sometimes even exclusively, of large-scale artificial changes. Did you think the cubicle was your ancestral environment?
4TheAncientGeek
I was using artificial to mean top-down or socially engineered.
-2[anonymous]
Good! So was I. The notion that societies evolve "bottom-up" - by any kind of general will rather than by the fiat and imposition of the powerful - is complete and total mythology.
0Lumifer
So tell me, which fiat imposed the collapse of the USSR?
-2[anonymous]
The committees of the Communist Party, from what I know of history. Who were, you know, the powerful in the USSR. If you're about to "parry" this sentence into saying, "Haha! Look what happens when you implement leftist ideas!", doing so will only prove that you're not even attempting to thoroughly consider what I am saying, but are instead just reaching for the first ideological weapon you can get against the Evil Threat of... whatever it is people of your ideological stripe think is coming to get you.
-3Lumifer
So, the USSR imploded because the "committees of the Communist Party" willed it to be so...? I am not sure we live in the same reality.
5TheAncientGeek
I find this exchange strange. My take is that Gorbachev attempted limited reforms, from the top down, which opened a floodgate of protest, from the bottom up.