To what extent does improved rationality lead to effective altruism?
It's been claimed that increasing rationality increases effective altruism. I think that this is true, but the effect size is unclear to me, so it seems worth exploring how strong the evidence for it is. I've offered some general considerations below, followed by a description of my own experience. I'd very much welcome thoughts on the effect that rationality has had on your own altruistic activities (and any other relevant thoughts).
The 2013 LW Survey found that 28.6% of respondents identified as effective altruists. This rate is much higher than the rate in the general population (even after controlling for intelligence), and because LW is distinguished by virtue of being a community focused on rationality, one might be led to the conclusion that increasing rationality increases effective altruism. But there are a number of possible confounding factors:
- It's ambiguous what the respondents meant when they said that they're "effective altruists." (They could have used the term the way Wikipedia does, or they could have meant it in a more colloquial sense.)
- Interest in rationality and interest in effective altruism might both stem from an underlying dispositional variable.
- Effective altruists may be more likely than members of the general population to seek to improve their epistemic rationality.
- The rationalist community and the effective altruist community may have become intertwined by historical accident, by virtue of having some early members in common.
So it's helpful to look beyond the observed correlation and think about the hypothetical causal pathways between increased rationality and increased effective altruism.
The above claim can be broken into several subclaims (any or all of which may be intended):
Claim 1: When people are more rational, they're more likely to choose the altruistic endeavors that they engage in with a view toward maximizing utilitarian expected value.
Claim 2: When people are more rational, they're more likely to succeed in their altruistic endeavors.
Claim 3: Being more rational strengthens people's altruistic motivation.
Claim 1: "When people are more rational, they're more likely to choose the altruistic endeavors that they engage in with a view toward maximizing utilitarian expected value."
Some elements of effective altruism thinking are:
- Consequentialism. In Yvain's Consequentialism FAQ, he argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value" upon reflection. Rationality seems useful for recognizing that there's a tension between these principles and other common moral intuitions, but this doesn't necessarily translate into a desire to resolve the tension nor a choice to resolve the tension in favor of these principles over others. So it seems that increased rationality does increase the likelihood that one will be a consequentialist, but that it's also not sufficient.
- Expected value maximization. In Circular Altruism and elsewhere, Eliezer describes cognitive biases that people employ in scenarios with a probabilistic element, and how reflection can lead one to the notion that one should organize one's altruistic efforts to maximize expected value (in the technical sense), rather than making decisions based on these biases. Here too, rationality seems useful for recognizing that one's intuitions are in conflict because of cognitive biases, without necessarily entailing an inclination to resolve the tension. However, in this case, if one does seek to resolve the tension, the choice of expected value maximization over other alternatives is canonical, so rationality seems to take one further toward expected value maximization than to consequentialism.
- The principle of indifference — the idea that from an altruistic point of view, we should care as much about people who are unrelated to us as we do about people who are related to us. For example, in The Life You Can Save: How to Do Your Part to End World Poverty, Peter Singer makes the case that we should show a similar degree of moral concern for people in the developing world who are suffering from poverty as we do for people in our neighborhoods. I'd venture the guess that its popularity among rationalists is an artifact of culture or a selection effect rather than a consequence of rationality. Note that concern about global poverty is far more prevalent than interest in rationality (while still being low enough that global poverty is far from alleviated).
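The expected-value point above can be made concrete with a toy comparison (all numbers are hypothetical, purely for illustration): an expected-value maximizer prefers a low-probability, high-impact intervention over a certain but smaller one whenever its expectation is higher, even though intuition often balks at the gamble.

```python
# Toy illustration (hypothetical numbers): expected-value maximization
# can favor a risky intervention over a certain one.

interventions = {
    "certain_small": {"p_success": 1.00, "lives_saved": 10},
    "risky_large":   {"p_success": 0.01, "lives_saved": 2000},
}

def expected_lives_saved(i):
    # expectation = probability of success times payoff on success
    return i["p_success"] * i["lives_saved"]

best = max(interventions, key=lambda k: expected_lives_saved(interventions[k]))
for name, i in sorted(interventions.items()):
    print(name, expected_lives_saved(i))
print("EV-maximizing choice:", best)  # risky_large (expectation 20 vs. 10)
```

This is exactly the kind of case where, as the post notes, unaided intuition (e.g. certainty effects) pulls against the calculation.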
Claim 2: "When people are more rational, they're more likely to succeed in their altruistic endeavors."
If "rationality" is taken to be "instrumental rationality" then this is tautologically true, so the relevant sense of "rationality" here is "epistemic."
- The question of how useful epistemic rationality is in general has been debated, (e.g. here, here, here, here, and here).
- I think that epistemic rationality matters more for altruistic endeavors than it does in other contexts. Cognitive biases were developed for survival and evolutionary fitness, and these things correlate more strongly with personal well-being than with the well-being of others. I think that epistemic rationality matters still more for those who aspire to maximize utilitarian expected value: cognitive biases correlate more strongly with well-being of others within one's social circles than they do with the well-being of those outside of one's social circles.
- In Cognitive Biases Potentially Affecting Judgment of Global Risks, Eliezer describes some cognitive biases that can lead one to underestimate the likelihood of risks of human extinction. To the extent that reducing these risks is the most promising philanthropic cause (as Eliezer has suggested), reducing cognitive biases improves people's prospects of maximizing utilitarian expected value.
Claim 3: "Being more rational strengthens people's altruistic motivation."
- I think that there may be some effect in this direction mediated through improved well-being: when people's emotional well-being increases, their empathy also increases.
- It's possible to come to the conclusion that one should care as much about others as one does about oneself through philosophical reflection, and I know people who have had this experience. I don't know whether or not this is accurately described as an effect attributable to improved accuracy of beliefs, though.
Putting it all together
The considerations above point in the direction of increased rationality of a population only slightly (if at all?) increasing the effective altruism at the 50th percentile of the population, but increasing the effective altruism at higher percentiles more, with the skewing becoming more and more extreme the further up one goes. This parallels, e.g., the effect of height on income.
My own experience
In A personal history of involvement with effective altruism I give some relevant autobiographical information. Summarizing and elaborating a bit:
- I was fully on board with consequentialism and with ascribing similar value to strangers as to familiar people as an early teenager, before I had any knowledge of cognitive biases as such, and at a time when my predictive model of the world was in many ways weaker than those of most adults.
- It was only when I read Eliezer's posts that the justification for expected value maximization in altruistic contexts clicked. Understanding it didn't require background knowledge — it seems independent of most aspects of rationality.
- I started reading Less Wrong because a friend pointed me to Yvain's posts on utilitarianism. My interest in rationality was more driven by my interest in effective altruism than the other way around. This is evidence that the high fraction of Less Wrongers who identify as effective altruists is partially a function of it being an attractor.
- So far increased rationality hasn't increased my productivity to a degree that's statistically significant. There are changes that have occurred in my thinking that greatly increase my productivity in the most favorable possible future scenarios, relative to a counterfactual in which these changes hadn't occurred. This is in consonance with my remark under the "putting it all together" heading above.
How about you?
Comments (156)
Sorry if this is obviously covered somewhere but every time I think I answer it in either direction I immediately have doubts.
Does EA come packaged with "we SHOULD maximize our altruism" or does it just assert that IF we are giving, well, anything worth doing is worth doing right?
For example, I have no interest in giving materially more than I already do, but getting more bang for my buck in my existing donations sounds awesome. Do I count? I currently think not but I've changed my mind enough to just ask.
It's a semantic distinction, but I would count you – every bit counts. There is some concern that the EA movement will become "watered down," but the concern is that epistemic standards will fall, not that the average percentage donated by members of the movement will fall.
Well, distortion of ideas and concepts within EA can go a long way. It doesn't hurt to be prepared for some meaning shift as well.
I dunno, does Holden Karnofsky count as an EA? See: http://blog.givewell.org/2007/01/06/limits-of-generosity/.
You count in my book.
I think it is viewed as something in between by the EA community. Dedicating 10% of your time and resources in order to most effectively help others would definitely count as EA according to most self-identifying EAs, while 1% probably wouldn't, but it's a spectrum anyway.
EA does not necessarily include any claims about moral realism / universally binding "shoulds", at least not as I understand it. It comes down to what you want to do.
This part seems a bit mixed up to me. This is partly because Yvain's Consequentialism FAQ is itself a bit mixed up, often conflating consequentialism with utilitarianism. "Others have nonzero value" really has nothing to do with consequentialism; one can be a consequentialist and be purely selfish, one can be non-consequentialist and be altruistic. "Morality lives in the world" is a pretty good argument for consequentialism all by itself; "others have nonzero value" is just about what type of consequences you should favor.
What's really mixed up here though is the end. When one talks about expected value maximization, one is always talking about the expected value over consequences; if you accept expected value maximization (for moral matters, anyway), you're already a consequentialist. Basically, what you've written is kind of backwards. If, on the other hand, we assume that by "consequentialism" you really meant "utilitarianism" (which, for those who have forgotten, does not mean maximizing expected utility in the sense discussed here but rather something else entirely[0]), then it would make sense; it takes you further towards maximizing expected value (consequentialism) than utilitarianism.
[0]Though it still is a flavor of consequentialism.
Good points. Is my intended meaning clear?
I mean, kind of? It's still all pretty mixed-up though. Enough people get consequentialism, expected utility maximization, and utilitarianism mixed up that I really don't think it's a good thing to further confuse them.
This comment actually makes aspects of your writings here make sense, that did not make sense to me before.
Your post, overall, seems to have the assumption underlying it, that effective altruism is rational, and obviously so. I am not convinced this is the case (at the very least, not the "and obviously so" part).
To the extent that effective altruism is anything like a "movement", a "philosophy", a "community", or really, anything less trivial than "well, altruism seems like the way to go, and we should be effective at things", it seems to me to need some justification, some arguing-for. I've not seen a whole lot of that. (Perhaps I have missed it.) I've not even seen a whole lot of really clear definitions, statements of purpose, or laying out of views.
So, do you happen to have a link handy, to something like a "this is what effective altruism is, and here's why it's a good idea, and obviously so"? (If not, then you might consider writing such a thing.)
My post does carry the connotation "whether or not people engage in effective altruism is significant," but I didn't mean for it to carry the connotation that effective altruism is rational – on the contrary, that's the very question that I'm exploring :-) (albeit from the opposite end of the telescope).
For an introduction to effective altruism, you could check out:
Are you familiar with them?
Thanks also, for the feedback.
I've read Yvain's article, and reread it just now. It has the same underlying problem, which is: to the extent that it's obviously true, it's trivial[1]; to the extent that it's nontrivial, it's not obviously true.
Yvain talks about how we should be effective in the charity we choose to engage in (no big revelation here), then seems almost imperceptibly to slide into an assumed worldview where we're all utilitarians, where saving children is, of course, what we care about most, where the best charity is the one that saves the most children, etc.
To what extent are all of these things part of what "effective altruism" is? For instance (and this is just one possible example), let's say I really care about paintings more than dead children, and think that £550,000 paid to keep one mediocre painting in a UK museum is money quite well spent, even when the matter of sanitation in African villages is put to me as bluntly as you like; but I aspire to rationality, and want to purchase my artwork-retention-by-local-museums as cost-effectively as I can. Am I an effective altruist?
To put this another way: if "effective altruism" is really just "we should be effective in our altruistic actions", then it seems frankly ridiculous that less than one-third of Less Wrong readers should identify as EA-ers. What do the other 71.4% think? That we should be ineffective altruists?? That altruism in general is just a bad idea? Do those two views really account for over seventy percent of the LW readership, do you think? Surely, in this case, the effective altruist movement just really needs to get better at explaining itself, and its obvious and uncontroversial nature, to the Less Wrong audience.
But effective altruism isn't just about that, yes? As a movement, as a philosophy, it's got all sorts of baggage, in the form of fairly specific values and ethical systems (that are assumed, and never really argued for, by EA-ers), like (a specific form of) utilitarianism, belief in things like the moral value of animals, and certain other things. Or, at least — such is the perception of people around here (myself included); and that, I think, is what's behind that 28.6% statistic.
[1] Well, trivial given the background that we, as Lesswrongians who have read and understood the Sequences, are assumed to have.
I haven't watched that TED talk (though I've read some of Peter Singer's writings); I will do that tomorrow.
Ok, I've watched Singer's TED talk now, thank you for linking it. It does work as a statement of purpose, certainly. On the other hand it fails as an attempt to justify or argue for the movement's core values; at the same time, it makes it quite clear that effective altruism is not just about "let's be altruists effectively". It's got some specific values attached, more specific than can justifiably be called simply "altruism".
I want to see, at least, some acknowledgment of that fact, and preferably, some attempt to defend those values. Singer doesn't do this; he merely handwaves in the general direction of "empathy" and "a rational understanding of our situation" (note that he doesn't explain what makes this particular set of values — valuing all lives equally — "rational").
Edit: My apologies! I just looked over your post again, and noticed this line, which my brain somehow ignored at first:
That (in fact, that whole paragraph) does go far toward addressing my concerns. Consider the objections in this comment at least partially withdrawn!
Apology accepted :-). (Don't worry, I know that my post was long and that catching everything can require a lot of energy.)
My impression is that a lack of compartmentalization is a risk factor for both LW and EA group membership.
My impression is also that it is a risk factor for religious mania.
Lack of compartmentalization, also called taking ideas seriously, when applied to religious ideas, gives you religious mania. Applied to various types of collective utilitarianism, it can produce anything from EA to antinatalism, from tithing to giving away all that you have. Applied to what it actually takes to find out how the world works, it gives you Science.
Whether it's a good thing or a bad thing depends on what's in the compartments.
Also on how conflicts are resolved.
Yes, this is a good point that I was semi-conscious of, but it wasn't salient enough to occur to me explicitly while writing my post.
A couple of points:
(1) You (and possibly others you refer to) seem to use the word 'consequentialism' to point to something more specific, e.g. classic utilitarianism, or some other variant. For example you say
Actually, consequentialism follows independently of "others have non zero value." Hence, classic utilitarianism's axiomatic call to maximize the good for the greatest number is dubious. Obviously, this principle is a damn fine heuristic, but it follows from consequentialism (as long as the social contract can be inferred to be useful), and isn't a foundation for it. The paper-clipping robot is still a consequentialist.
(2) Your described principle of indifference seems to me to be manifestly false.
When we talk of the value of any thing, we are not talking of an intrinsic property of the thing, but a property of the relationship between the thing and the entity holding the value. (People are also things.) If an entity holds any value in some object, the object must exhibit some causal effect on the entity. The nature and magnitude of the value held must be consequences of that causality. Thus, we must expect value to scale (in an order-reversing way) with some generalized measure of proximity, or causal connectedness. It is not rational for me to care as much about somebody outside my observable universe as I do about a member of my family.
I didn't quite have classical utilitarianism in mind. I had in mind principles like
I did mean after controlling for ability to have an impact.
Strikes me as a bit like saying "once we forget about all the differences, everything is the same." Is there a valid purpose to this indifference principle?
Don't get me wrong, I can see that quasi-general principles of equality are worth establishing and defending, but here we are usually talking about something like equality in the eyes of the state, ie equality of all people, in the collective eyes of all people, which has a (different) sound basis.
If you actually did some kind of expected value calculation, with your utility function set to something like U(thing) = u(thing) / causal-distance(thing), you would end up double-counting "ability to have an impact", because there is already a 1/causal-distance sort of factor in E(U|action) = sum { U(thing') P(thing' | action) } built into how much each action affects the probabilities of the different outcomes (which is basically what "ability to have an impact" is).
That's assuming that what JonahSinick meant by "ability to have an impact" was the impact of the agent upon the thing being valued. But it sounds like you might have been talking about the effect of thing upon the agent? As if all you can value about something is any observable effect that thing can have on yourself (which is not an uncontroversial opinion)?
Note that I wasn't arguing that it's rational. See the quotation in this comment. Rather, I was describing an input into effective altruist thinking.
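The double-counting worry discussed above can be sketched numerically. In this toy model (all functional forms are assumptions for illustration), the probability of an action affecting a thing is taken to fall off inversely with causal distance; if the utility function is also divided by distance, the outcome ends up discounted by distance twice.

```python
# Sketch of the double-counting worry (hypothetical functional forms):
# dividing utility by causal distance AND having reachability fall off
# with distance discounts distant things twice (roughly 1/d**2).

def prob_of_affecting(distance):
    # assumption: an action's chance of affecting a thing falls off
    # inversely with causal distance
    return min(1.0, 1.0 / distance)

def discounted_utility(u, distance):
    # the proposed U(thing) = u(thing) / causal-distance(thing)
    return u / distance

def expected_value(u, distance):
    return discounted_utility(u, distance) * prob_of_affecting(distance)

# a thing of base value 100, nearby vs. far away:
near = expected_value(100, 1)    # 100.0
far = expected_value(100, 10)    # (100/10) * (1/10) = 1.0, a 1/d**2 discount
print(near, far)
```

Under these assumptions, the distance factor that was meant to encode "ability to have an impact" appears once in U and once again in the probabilities, which is the objection being made.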
Thank you for bringing this up. I've found myself having to point out this distinction (between consequentialism and utilitarianism) a number of times; it seems a commonplace confusion around here.
I see Sniffnoy also raised the same point.
In my understanding of things, rationality does not involve values, and altruism is all about values. They are orthogonal.
LW as a community (for various historical reasons) has a mix of rationalists and effective altruists. That's a characteristic of this particular community, not a feature of either rationalism or EA.
Effective altruism isn't just being extra super altruistic, though. EA as currently practiced presupposes certain values, but its main insight isn't value-driven: it's that you can apply certain quantification techniques toward figuring out how to optimally implement your value system. For example, if an animal rights meta-charity used GiveWell-inspired methods to recommend groups advocating for veganism or protesting factory farming or rescuing kittens or something, I'd call that a form of EA despite the differences between its conception of utility and GiveWell's.
Seen through that lens, effective altruism seems to have a lot more in common with LW-style rationality than, say, a preference for malaria eradication does by itself. We don't dictate values, and (social pressure aside) we probably can't talk people into EA if their value structure isn't compatible with it, but we might easily make readers with the right value assumptions more open to quantified or counterintuitive methods.
Yes, of course.
So does effective proselytizing, for example. Or effective political propaganda.
Take away the "presupposed values" and all you are left with is effectiveness.
Yes, LW exposure might easily dispose readers with those inclinations toward EA-like forms of political or religious advocacy, and if that's all they're doing then I wouldn't call them effective altruists (though only because politics and religion are not generally considered forms of altruism). That doesn't seem terribly relevant, though. Politics and religion are usually compatible with altruism, and nothing about effective altruism requires devotion solely to GiveWell-approved causes.
I'm really not sure what you're trying to demonstrate here. Some people have values incompatible with EA's assumptions? That's true, but it only establishes the orthogonality of LW ideas with EA if everyone with compatible values was already an effective altruist, and that almost certainly isn't the case. As far as I can tell there's plenty of room for optimization.
(It does establish an upper bound, but EA's market penetration, even after any possible LW influence, is nowhere near it.)
That rationality and altruism are orthogonal. That effective altruism is predominantly altruism and "effective" plays a second fiddle to it. That rationality does not imply altruism (in case you think it's a strawman, tom_cr seems to claim exactly that).
If effective altruism was predominantly just altruism, we wouldn't be seeing the kind of criticism of it from a traditionally philanthropic perspective that we have been. I see this as strong evidence that it's something distinct, and therefore that it makes sense to talk about something like LW rationality methods bolstering it despite rationality's silence on pure questions of values.
Yes, it's just [a method of quantifying] effectiveness. But effectiveness in this context, approached in this particular manner, is more significant -- and, perhaps more importantly, a lot less intuitive -- than I think you're giving it credit for.
I don't know about that. First, EA is competing for a limited resource, donors' money, and even worse, EA keeps on telling others that they are doing it wrong. Second, the idea that charity money should be spent in effective ways is pretty uncontroversial. I suspect (that's my prior, adjustable by evidence) that most of the criticism is aimed at specific recommendations of GiveWell and others, not at the concept of getting more bang for your buck.
Take a look at Bill Gates. He is explicitly concerned with the effectiveness and impact of his charity spending -- to the degree that he decided to bypass most established nonprofits and set up his own operation. Is he a "traditional" or an "effective" altruist? I don't know.
Yes, I grant you that. Traditional charity tends to rely on purely emotional appeals. But I don't know if that's enough to push EA into a separate category of its own.
Rationality itself does not involve values. But a human who learns rationality already has those values, and rationality can help them understand those values better, decompartmentalize, and optimize more efficiently.
So? Let's say I value cleansing the Earth of untermenschen. Rationality can indeed help me achieve my goals and "optimize more efficiently". Once you start associating rationality with sets of values, I don't see how can you associate it with only "nice" values like altruism, but not "bad" ones like genocide.
Maybe, but at least they'll be campaigning for mandatory genetic screening for genetic disorders rather than kill people of some arbitrary ethnicity they happened to fixate on.
Because there's a large set of "nice" values that most of humanity shares.
Along with a large set of "not so nice" values that most of humanity shares as well. A glance at history should suffice to demonstrate that.
I think one of the lessons from history is that we can still massacre each other even when everyone is acting in good faith.
I think this needs to be differentiated further or partly corrected:
Cognitive biases which improve individual fitness by needing fewer resources, i.e. heuristics which arrive at the same or an almost equally good result but with fewer resources. Reducing time and energy thus benefits the individual. Example:
Cognitive biases which improve individual fitness by avoiding dangerous parts of life space. Examples: risk aversion, status-quo bias (in a way this is a more abstract form of the basic fears like fear of heights or of spiders, which also avoid dangerous situations (or help getting out of them quickly)).
Cognitive biases which improve individual fitness by increasing the likelihood of reproductive success. These are probably the most complex and intricately connected to emotions. In a way emotions are comparable to biases, or at least trigger specific biases. For example, infatuation activates powerful biases regarding the object of the infatuation and the situation at large: positive thinking, confirmation bias, ...
Cognitive biases which improve collective fitness (i.e. benefit other carriers of the same gene). My first examples are all not really biases but emotions: love toward children (your own, but also others'), initial friendliness toward strangers (tit-for-tat strategy), altruism in general. An example of a real bias is the positive thinking related to children: disregard of their faults, confirmation bias. But these are, I think, mostly used to rationalize one's behavior in the absence of the real explanation: you love your children and expend significant energy never to be paid back, because those who do have more successful offspring.
In general I wonder how to disentangle biases from emotions. You wouldn't want to rationalize against your emotions. That will not work. And if emotions trigger/strengthen biases, then suppressing biases essentially means suppressing emotion.
I think the expression of the relationship between emotions and biases is at least partly learned. It could be possible to unlearn the triggering effect of the emotions. Kind of hacking your terminal goals. The question is: if you tricked your emotions in this way, what would it then mean to have them, except providing internal sensation?
Thanks for the thoughts. These points all strike me as reasonable.
Why not? Rationalizing against (unreasonable) fear seems fine to me. Rationalizing against anger looks useful. Etc., etc.
Yes. I didn't think this through to all its consequences.
It is a well-known psychological fact that humans have a quite diverse set of basic fears that appear, develop, and are normally overcome (understood, limited, suppressed, ...) during childhood. Dealing with your fears, coming to terms with them, is indeed a normal process.
Quite a good read about this is Helping Children Overcome Fears.
Indeed, having them initially is in most cases adaptive (I wonder whether it would be a global net positive if we could remove fear of spiders, weighing the cost of lost time and energy due to spider fear against the remaining dangerous cases).
The key point is that a very unspecific fear like fear of darkness is moderated into a form where it doesn't control you and where it only applies to cases that you didn't adapt to earlier (many people still freak out if put into extremely unusual situations which add (multiply?) multiple such fears). And whether having them in these cases is positive, I can at best speculate on.
Nonetheless this argument that many fears are less adaptive than they used to be (because civilization weeded them out) is independent of the other emotions, esp. the 'positive' ones like love, empathy, happiness and curiosity, which it appears also put you into a biased state. Would you want to get rid of these too? Which?
Humans exist in a permanent "biased state". The unbiased state is the province of Mr. Spock and Mr. Data, Vulcans and androids.
I think that rationality does not get rid of biases, but rather allows you to recognize them and compensate for them. Just like with e.g. fear -- you rarely lose a particular fear altogether, you just learn to control and manage it.
You seem to mean that biases are the brain's way to perceive the world in a manner that focuses on the 'important' parts. Besides terminal goals, which just evaluate the perception with respect to utility, this acts as a filter but thereby also implies goals (namely the reduction of the importance of the filtered-out parts).
Yes, but note that a lot of biases are universal to all humans. This means they are biological (as opposed to cultural) in nature. And this implies that the goals they developed to further are biological in nature as well. Which means that you are stuck with these goals whether your conscious mind likes it or not.
Yes. That's what I meant when I said: "You wouldn't want to rationalize against your emotions. That will not work."
If your conscious mind has goals incompatible with the effects of bioneuropsychological processes, then frustration seems the mildest result.
I still don't know about that. A collection of such "incompatible goals" has been described as civilization :-)
For example, things like "kill or drive away those-not-like-us" look like biologically hardwired goals to me. Having a conscious mind have its own goals incompatible with that one is probably a good thing.
Sure, we have to deal with some of these inconsistencies. And for some of us this is a continuous source of frustration. But we do not have to add more of them than absolutely necessary, do we?
risk aversion is not a bias.
It might or might not be. If it is coming from your utility function, it's not. If it is "extra" to the utility function it can be a bias.
I understood risk aversion to be a tendency to prefer a relatively certain payoff, to one that comes with a wider probability distribution, but has higher expectation. In which case, I would call it a bias.
It's not a bias, it's a preference. Insofar as we reserve the term bias for irrational "preferences" or tendencies or behaviors, risk aversion does not qualify.
I would call it a bias because it is irrational.
It (as I described it - my understanding of the terminology might not be standard) involves choosing an option that is not the one most likely to lead to one's goals being fulfilled (this is the definition of 'payoff', right?).
Or, as I understand it, risk aversion may amount to consistently identifying one alternative as better when there is no rational difference between them. This is also an irrational bias.
Problems with your position:
1. "goals being fulfilled" is a qualitative criterion, or perhaps a binary one. The payoffs at stake in scenarios where we talk about risk aversion are quantitative and continuous.
Given two options, of which I prefer the one with lower risk but a lower expected value, my goals may be fulfilled to some degree in both cases. The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.
2. The alternatives at stake are probabilistic scenarios, i.e. each alternative is some probability distribution over some set of outcomes. The expectation of a distribution is not the only feature that differentiates distributions from each other; the form of the distribution may also be relevant.
Taking risk aversion to be irrational means that you think the form of a probability distribution is irrelevant. This is not an obviously correct claim. In fact, in Rational Choice in an Uncertain World [1], Robyn Dawes argues that the form of a probability distribution over outcomes is not irrelevant, and that it's not inherently irrational to prefer some distributions over others with the same expectation. It stands to reason (although Dawes doesn't seem to come out and say this outright, he heavily implies it) that it may also be rational to prefer one distribution to another with a higher expectation.
[1] pp. 159-161 in the 1988 edition, if anyone's curious enough to look this up. Extra bonus: This section of the book (chapter 8, "Subjective Expected Utility Theory", where Dawes explains VNM utility) doubles as an explanation of why my preferences do not adhere to the von Neumann-Morgenstern axioms.
Point 1:
If option 1 leads only to a goal being 50% fulfilled, and option 2 leads only to the same goal being 51% fulfilled, then there is a sub-goal that option 2 satisfies (i.e. 51% fulfillment) but option 1 doesn't, and not vice versa. Thus option 2 is better under any reasonable attitude. The payoff is the goal, by definition: the greater the payoff, the more goals are fulfilled.
But risk is integral to the calculation of utility. 'Risk avoidance' and 'value' are synonyms.
Point 2:
Thanks for the reference.
But, if we are really talking about a payoff as an increased amount of utility (and not some surrogate, e.g. money), then I find it hard to see how choosing an option that is less likely to provide the payoff can be better.
If it is really safer (ie better, in expectation) to choose option 1, despite having a lower expected payoff than option 2, then is our distribution really over utility?
Perhaps you could outline Dawes' argument? I'm open to the possibility that I'm missing something.
Dawes' argument, as promised.
The context is: Dawes is explaining von Neumann and Morgenstern's axioms.
Aside: I don't know how familiar you are with the VNM utility theorem, but just in case, here's a brief primer.
The VNM utility theorem presents a set of axioms, and then says that if an agent's preferences satisfy these axioms, then we can assign any outcome a number, called its utility, written as U(x); and it will then be the case that given any two alternatives X and Y, the agent will prefer X to Y if and only if E(U(X)) > E(U(Y)). (The notation E(x) is read as "the expected value of x".) That is to say, the agent's preferences can be understood as assigning utility values to outcomes, and then preferring to have more (expected) utility rather than less (that is, preferring those alternatives which are expected to result in greater utility).
In other words, if you are an agent whose preferences adhere to the VNM axioms, then maximizing your utility will always, without exception, result in satisfying your preferences. And in yet other words, if you are such an agent, then your preferences can be understood to boil down to wanting more utility; you assign various utility values to various outcomes, and your goal is to have as much utility as possible. (Of course this need not be anything like a conscious goal; the theorem only says that a VNM-satisfying agent's preferences are equivalent to, or able to be represented as, such a utility formulation, not that the agent consciously thinks of things in terms of utility.)
(Dawes presents the axioms in terms of alternatives or gambles; a formulation of the axioms directly in terms of the consequences is exactly equivalent, but not quite as elegant.)
N.B.: "Alternatives" in this usage are gambles, of the form ApB: you receive outcome A with probability p, and otherwise (i.e. with probability 1–p) you receive outcome B. (For example, your choice might be between two alternatives X and Y, where in X, with p = 0.3 you get consequence A and with p = 0.7 you get consequence B, and in Y, with p = 0.4 you get consequence A and with p = 0.6 you get consequence B.) Alternatives, by the way, can also be thought of as actions; if you take action X, the probability distribution over the outcomes is so-and-so; but if you take action Y, the probability distribution over the outcomes is different.
(If all of this is old hat to you, apologies; I didn't want to assume.)
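The choice rule the primer describes can be sketched in a few lines of Python. This is only a toy illustration: the outcomes, probabilities, and utility values are invented, not taken from Dawes or the theorem itself.

```python
# Toy sketch of VNM-style choice. An alternative (gamble) is a list of
# (probability, outcome) pairs; the agent's utility assignment is a dict.
# All names and numbers here are illustrative.

def expected_utility(alternative, utility):
    """E(U(X)) for a gamble given as (probability, outcome) pairs."""
    return sum(p * utility[outcome] for p, outcome in alternative)

utility = {"cake": 10, "pie": 6, "nothing": 0}

X = [(0.3, "cake"), (0.7, "nothing")]   # cake with p = 0.3, else nothing
Y = [(0.4, "pie"), (0.6, "nothing")]    # pie with p = 0.4, else nothing

# A VNM agent prefers whichever alternative has higher expected utility.
preferred = X if expected_utility(X, utility) > expected_utility(Y, utility) else Y

print(expected_utility(X, utility))  # ≈ 3.0
print(expected_utility(Y, utility))  # ≈ 2.4, so X is preferred
```

The point of the representation theorem is that, for an agent satisfying the axioms, *some* such `utility` dict (up to positive affine transformation) reproduces all of the agent's preferences over gambles.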
The question is: do our preferences satisfy VNM? And: should our preferences satisfy VNM?
It is commonly said (although this is in no way entailed by the theorem!) that if your preferences don't adhere to the axioms, then they are irrational. Dawes examines each axiom, with an eye toward determining whether it's mandatory for a rational agent to satisfy that axiom.
Dawes presents seven axioms (which, as I understand it, are equivalent to the set of four listed in the Wikipedia article, just with a difference in emphasis), of which the fifth is Independence.
The independence axiom says that A ≥ B (i.e., A is preferred to B) if and only if ApC ≥ BpC. In other words, if you prefer receiving cake to receiving pie, you also prefer receiving (cake with probability p and death with probability 1–p) to receiving (pie with probability p and death with probability 1–p).
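For any expected-utility maximizer, this axiom holds automatically: mixing both alternatives with a common outcome C scales the utility difference by p without changing its sign. A quick numerical check illustrates this (the utility values are made up for the cake/pie/death example above):

```python
# For an expected-utility maximizer, E(U(ApC)) - E(U(BpC)) = p * (U(A) - U(B)),
# so the ordering of A and B is preserved under mixing with any common C.
# Utility values are arbitrary, chosen only for illustration.

def mix(u_x, p, u_c):
    """Expected utility of the gamble XpC: outcome X with prob p, else C."""
    return p * u_x + (1 - p) * u_c

u_cake, u_pie, u_death = 10.0, 6.0, -100.0

for p in (0.1, 0.5, 0.9):
    # cake > pie, so (cake p death) > (pie p death) for every p > 0
    assert mix(u_cake, p, u_death) > mix(u_pie, p, u_death)
```

This is why violating independence (as Dawes goes on to discuss) cannot be represented by *any* expected-utility agent, whatever utility function it uses.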
Dawes examines one possible justification for violating this axiom — framing effects, or pseudocertainty — and concludes that it is irrational. (Framing is the usual explanation given for why the expressed or revealed preferences of actual humans often violate the independence axiom.) Dawes then suggests another possibility:
[5] This is Dawes' footnote; it talks about an objection to "Reaganomics" on similar grounds.
Essentially, Dawes is asking us to imagine two possible actions. Both have the same expected utility; that is, the "degree of goal satisfaction" which will result from each action, averaged appropriately across all possible outcomes of that action (weighted by probability of each outcome), is exactly equal.
But the actual probability distribution over outcomes (the form of the distribution) is different. If you do action A, then you're quite likely to do alright, there's a reasonable chance of doing pretty well, and a small chance of doing really great. If you do action B, then you're quite likely to do pretty well, there's a reasonable chance of doing OK, and a small chance of doing disastrously, ruinously badly. On average, you'll do equally well either way.
The Independence axiom dictates that we have no preference between those two actions. To prefer action A, with its attendant distribution of outcomes, to action B with its distribution, is to violate the axiom. Is this irrational? Dawes says no. I agree with him. Why shouldn't I prefer to avoid the chance of disaster and ruin? Consider what happens when the choice is repeated, over the course of a lifetime. Should I really not care whether I occasionally suffer horrible tragedy or not, as long as it all averages out?
But if it's really a preference — if I'm not totally indifferent — then I should also prefer less "risky" (i.e. less negatively skewed) distributions even when the expectation is lower than that of distributions with more risk (i.e. more negative skew) — so long as the difference in expectation is not too large, of course. And indeed we see such a preference not only expressed and revealed in actual humans, but enshrined in our society: it's called insurance. Purchasing insurance is an expression of exactly the preference to reduce the negative skew in the probability distribution over outcomes (and thus in the distributions of outcomes over your lifetime), at the cost of a lower expectation.
This sounds like regular risk aversion, which is normally easy to model by transforming utility by some concave function. How do you show that there's an actual violation of the independence axiom from this example? Note that the axioms require that there exist a utility function
u :: outcome -> real

such that you maximise expected utility, not that some particular function (such as the two graphs you've drawn) actually represents your utility.

In other words, you haven't really shown that "to prefer action A, with its attendant distribution of outcomes, to action B with its distribution, is to violate the axiom", since the two distributions don't have the form ApC, BpC with A ≅ B. Simply postulating that the expected utilities are the same only shows that that particular utility function is not correct, not that no valid utility function exists.
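The "regular risk aversion" mentioned here is easy to demonstrate with a concave utility function over money. Square-root utility is an arbitrary illustrative choice (not anyone's actual utility function): it makes a sure $50 strictly preferred to a 50/50 gamble between $0 and $100, even though the expected dollar amounts are equal.

```python
import math

# "Regular" risk aversion via a concave utility function over money.
# Square-root utility is an arbitrary illustrative choice.
def u(money):
    return math.sqrt(money)

sure_thing = u(50)                  # utility of $50 for certain
gamble = 0.5 * u(0) + 0.5 * u(100)  # 50/50 gamble between $0 and $100

# Expected money is $50 in both cases, but the concave agent
# prefers the certainty:
print(sure_thing > gamble)  # True: sqrt(50) ≈ 7.07 > 5.0
```

Note this kind of risk aversion is fully compatible with the VNM axioms, because the curvature lives inside the utility function; the disagreement in this thread is about preferences over distribution *shape* that no single utility function can capture.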
Assuming you privilege some reference point as your x-axis origin, sure. But there's no good reason to do that. It turns out that people are risk averse no matter what origin point you select (this is one of the major findings of prospect theory), and thus the concave function you apply will be different (will be in different places along the x-axis) depending on which reference point you present to a person. This is clearly irrational; this kind of "regular risk aversion" is what Dawes refers to when he talks about independence axiom violation due to framing effects, or "pseudocertainty".
The graphs are not graphs of utility functions. See the first paragraph of my post here.
Indeed they do; because, as one of the other axioms states, each outcome in an alternative may itself be an alternative; i.e., alternatives may be constructed as probability mixtures of other alternatives, which may themselves be... etc. If it's the apparent continuity of the graphs that bothers you, then you have but to zoom in on the image, and you may pretend that the pixelation you see represents discreteness of the distribution. The point stands unchanged.
The point Dawes is making, I think, is that any utility function (or at least, any utility function where the calculated utilities more or less track an intuitive notion of "personal value") will lead to this sort of preference between two such distributions. In fact, it is difficult to imagine any utility function that both corresponds to a person's preferences and doesn't lead to preferring less negatively skewed probability distributions over outcomes to more negatively skewed ones.
Thanks very much for taking the time to explain this.
It seems like the argument (very crudely) is that, "if I lose this game, that's it, I won't get a chance to play again, which makes this game a bad option." If so, again, I wonder if our measure of utility has been properly calibrated.
It seems to me like the expected utility of option B, where I might get kicked out of the game, is lower than the expected utility of option A, where this is impossible. Your example of insurance may not be a good one, as one insures against financial loss, but money is not identical to utility.
Nonetheless, those exponential distributions make a very interesting argument.
I'm not entirely sure, I need to mull it over a bit more.
Thanks again, I appreciate it.
Just a brief comment: the argument is not predicated on being "kicked out" of the game. We're not assuming that even the lowest-utility outcomes cause you to no longer be able to continue "playing". We're merely saying that they are significantly worse than average.
Re: your response to point 1: again, the options in question are probability distributions over outcomes. The question is not one of your goals being 50% fulfilled or 51% fulfilled, but, e.g., a 51% probability of your goals being 100% fulfilled vs., a 95% probability of your goals being 50% fulfilled. (Numbers not significant; only intended for illustrative purposes.)
"Risk avoidance" and "value" are not synonyms. I don't know why you would say that. I suspect one or both of us is seriously misunderstanding the other.
Re: point #2: I don't have the time right now, but sometime over the next couple of days I should have some time and then I'll gladly outline Dawes' argument for you. (I'll post a sibling comment.)
If I'm talking about a goal actually being 50% fulfilled, then it is.
Really?
I consider risk to be the possibility of losing or not gaining (essentially the same) something of value. I don't know much about economics, but if somebody could help avoid that, would people be willing to pay for such a service?
If I'm terrified of spiders, then that is something that must be reflected in my utility function, right? My payoff from being close to a spider is less than otherwise.
That would be very kind :) No need to hurry.
Being more rational makes rationalization harder. When confronted with thought experiments such as Peter Singer's drowning-child example, it becomes harder to come up with reasons for not changing one's actions while still maintaining a self-image of being caring. While non-rationalists often object to EA with bad arguments (e.g. by not understanding expected utility theory or decision-making under uncertainty), rationalists are more likely to draw radical conclusions. They might either accept the extreme conclusion that they want to be more effectively altruistic, or accept the extreme conclusion that they don't share the premise the thought experiment relies on, namely that they care significantly about others for their own sake. Increased rationality weeds out the middle-ground positions that are kept in place by rationalizations or a simple lack of further thinking (which is not to say that all middle-ground positions are like that, of course).
It would be interesting to figure out the factors that determine which way the bullet will be bitten. I would predict that the vast majority of EA-rationalists have other EAs in their close social environment.
I wouldn't suggest that people's response to dilemmas like Singer's is rationalization. Rather, I'd say that people have principles but are not very good at articulating them. If they say they should save a dying child because of some principle, that "principle" is just their best attempt to approximate the actual principle that they can't articulate.
If the principle doesn't fit when applied to another case, fixing up the principle isn't rationalization; it's recognizing that the stated principle was only ever an approximation, and trying to find a better approximation. (And if the fix up is based on bad reasoning, that's just "trying to find a better approximation, and making a mistake doing so".)
It may be easier to see when not talking about saving children. If you tell me you don't like winter days, and I point out that Christmas is a winter day and you like Christmas, and you then respond "well, I meant a typical winter day, not a special one like Christmas", that's not a rationalization, that's just revising what was never a 100% accurate statement and should not have been expected to be.
Yes, this is a good point.
I think it is rationally optimal for me to not give any money away since I need all of it to pursue rationally-considered high-level goals. (Much like Eliezer probably doesn't give away money that could be used to design and build FAI --because given the very small number of people now working on the problem, and given the small number of people capable of working on the problem, that would be irrational of him). There's nothing wrong with believing in what you're doing, and believing that such a thing is optimal. ...Perhaps it is optimal. If it's not, then why do it? If money --a fungible asset-- won't help you to do it, it's likely "you're doing it wrong."
Socratic questioning helps. Asking the opposite of a statement, or its invalidation helps.
Most people I've met lack rational high-level goals, and have no prioritization schemes that hold up to even cursory questioning, therefore, they could burn their money or give it to the poor and get a better system-wide "high level" outcome than buying another piece of consumer electronics or whatever else they were going to buy for themselves. Heck, if most people had vastly more money, they'd kill themselves with it --possibly with high glycemic index carbohydrates, or heroin. Before they get to effective altruism, they have to get to rational self-interest, and disavow coercion as a "one size fits all problem solver."
Since that's not going to happen, and since most people are actively involved with worsening the plight of humanity, including many LW members, I'd suggest that a strong dose of the Hippocratic Oath prescription is in order:
First, do no harm.
Sure, the human-level tiny brains are enamored with modern equivalents of medical "blood-letting." But you're an early-adopter, and a thinker, so you don't join them. First, do no harm!
Sure, your tiny brained relatives over for Thanksgiving vote for "tough on crime" politicians. But you patiently explain jury nullification of law to them, indicating that one year prior to marijuana legalization in Colorado by the vote, marijuana was de facto legalized because prosecutors were experiencing too much jury nullification of law to save face while trying to prosecute marijuana offenders. Then, you show them Sanjay Gupta's heartbreaking video documentary about how marijuana prohibition is morally wrong.
You do what you have to to change their minds. You present ideas that challenge them, because they are human beings who need something other than a bland ocean of conformity to destruction and injustice. You help them to be better people, taking the place of "strong benevolent Friendly AI" in their lives.
In fact, for simple dualist moral decisions, the people on this board can function as FAI.
The software for the future we want is ours to evolve, and the hardware designers' to build.
Another effect: people on LW are massively more likely to describe themselves as effective altruists. My moral ideals were largely formed before I came into contact with LW, but not until I started reading was I introduced to the term "effective altruism".
The question appears to assume that LW participation is identically equal to improved rationality. Involvement in LW and involvement in EA are pretty obviously going to be correlated, given that they're closely related subcultures.
If this is not the case: Do you have a measure to hand of "improved rationality" that doesn't involve links to LW?
Without deliberately bringing up mind-killy things, I would have to ask, if we tie together Effective Altruism and rationality, why Effective Altruists are not socialists of some sort.
-- Understanding Rawls: A Reconstruction and Critique of a Theory of Justice, by Robert Paul Wolff
I actually picked that up on the recommendation from an LW thread to read about Rawls, but I hope the highlight gets Wolff's point across. Elsewhere, he phrases it roughly as: by default, the patterns of distribution arise directly from the patterns of production, and therefore we can say or do very little about perceived distributional problems if we are willing to change nothing at all about the underlying patterns of production producing (ahaha) the problematic effect.
Or in much simpler words: why do we engage in lengthy examinations of sending charity to people who could look after themselves just fine if we stopped robbing them of resources? The success of GiveDirectly should be causing us to reexamine the common assumption that poor people are poor for some reason other than that they lack property to capitalize for themselves.
Anyway, I'm going to don my flame-proof suit now. (And in my defense, my little giving this year so far has already included $720 to CareerVillage for advising underprivileged youth in the First World and $720 to GiveDirectly for direct transfer to the poor in the Third World. I support interventions that work!)
I'm a loyal tovarisch of Soviet Canuckistan, and I have to say that doesn't seem like a conundrum to me: there's no direct contradiction between basing one's charitable giving on evidence about charitable organizations' effectivenesses and thinking that markets in which individual are free to act will lead to more preferable outcomes than markets with state-run monopolies/monopsonies.
The whole point of the second quotation and the paragraph after that was to... Oh never mind, should I just assume henceforth that contrary to its usage in socialist discourse, to outsiders "socialism" always means state-owned monopoly? In that case, what sort of terminology should I use for actual worker control of the means of production, and such things?
"Anarcho-syndicalism" maybe? All's I know is that my socialized health insurance is a state-run oligopsony/monopoly (and so is my province's liquor control board). In any event, if direct redistribution of wealth is the key identifier of socialism, then Milton Friedman was a socialist, given his support for negative income taxes.
Prolly the best thing would be to avoid jargon as much as possible when talking to outsiders and just state what concrete policy you're talking about. For what it's worth, it seems to me that you've used the term "socialism" to refer to two different, conflated, specific policies. In the OP you seem to be talking about direct redistribution of money, which isn't necessarily equivalent to the notion of worker control of the means of production that you introduce in the parent; and the term "socialism" doesn't pick out either specific policy in my mind. (An example of how redistribution and worker ownership are not equivalent: on Paul Krugman's account, if you did direct redistribution right now, you'd increase aggregate demand but not even out ownership of capital. This is because current household consumption seems to be budget-constrained in the face of the ongoing "secular stagnation" -- if you gave poor people a whack of cash or assets right now, they'd (liquidate and) spend it on things they need rather than investing/holding it. )
Ah, here's the confusion. No, in the OP I was talking about worker control of the means of production, and criticizing Effective Altruism for attempting to fix poverty and sickness through what I consider an insufficiently effective intervention, that being direct redistribution of money.
Oh, I see. Excellent clarification.
How would you respond to (what I claim to be) Krugman's account, i.e., in current conditions poor households are budget-constrained and would, if free to do so, liquidate their ownership of the means of production for money to buy the things they need immediately? Just how much redistribution of ownership are you imagining here?
Basically, I accept that critique, but only at an engineering level. Ditto on the "how much" issue: it's engineering. Neither of these issues actually makes me believe that a welfare state strapped awkwardly on top of a fundamentally industrial-capitalist, resource-capitalist, or financial-capitalist system - and constantly under attack by anyone perceiving themselves as a put-upon well-heeled taxpayer to boot - is actually a better solution to poverty and inequality than a more thoroughly socialist system in which such inequalities and such poverty just don't happen in the first place (because they're not part of the system's utility function).
I certainly believe that we have not yet designed or located a perfect socialist system to implement. What I do note, as addendum to that, is that nobody who supports capitalism believes the status quo is a perfect capitalism, and most people who aren't fanatical ideologues don't even believe we've found a perfect capitalism yet. The lack of a preexisting design X and a proof that X Is Perfect do not preclude the existence of a better system, whether redesigned from scratch or found by hill-climbing on piecemeal reforms.
All that lack means is that we have to actually think and actually try -- which we should have been doing anyway, if we wish to act according to our profession to be rational.
Good answer. (Before this comment thread I was, and I continue to be, fairly sympathetic to these efforts.)
Thanks!
An interesting question :-D How do you define "socialism" in this context?
I would define it here as any attempt to attack economic inequality at its source by putting the direct ownership of capital goods and resultant products in the hands of workers rather than with a separate ownership class. This would thus include: cooperatives of all kinds, state socialism (at least, when a passable claim to democracy can be made), syndicalism, and also Georgism and "ecological economics" (which tend to nationalize/publicize the natural commons and their inherent rentier interest rather than factories, but the principle is similar).
A neat little slogan would be to say: you can't fix inequality through charitable redistribution, not by state sponsorship nor individual effort, but you could fix it through "predistribution" of property ownership to ensure nobody is proletarianized in the first place. (For the record: "proletarianized" means that someone lacks any means of subsistence other than wage-labor. There are very many well-paid proletarians among the Western salariat, the sort of people who read this site, but even this well-off kind of wage labor becomes extremely problematic when the broader economy shifts -- just look at how well lawyers or doctors are faring these days in many countries!)
I do agree that differences of talent, skill, education and luck in any kind of optimizing economy (so not only markets but anything else that rewards competence and punishes incompetence) will eventually lead to some inequalities on grounds of competence and many inequalities due to network effects, but I don't think this is a good ethical excuse to abandon the mission of addressing the problem at its source. It just means we have to stop pretending to be wise by pointing out the obvious problems and actually think about how to accomplish the goal effectively.
The distinction you're making with the word "proletarianized" doesn't really make sense when the market value of the relevant skills is larger than the cost of the means of production.
Owning the means of production doesn't help here since the broader economic shifts can make your factory obsolete just as easily as they can make your skills obsolete.
That doesn't work in the long term. What happens to a socialized factory when demand for the good it produces decreases? A capitalist factory would lay off some workers, but a socialized factory can't do that, so winds up making uncompetitive products. The result is that the socialized factory will eventually get out-competed by capitalist factories. If you force all factories to be socialized, this will eventually lead to economic stagnation. (By the way, this is not the only problem with socialized factories.)
Why not? Who says? Are we just automatically buying into everything the old American propagandists say now ;-)?
I've not even specified a model and you're already making unwarranted assumptions. It sounds to me like you've got a mental stop-sign.
But you did refer to real world examples.
That's a reasonable definition.
By the way, do you think that EA should tackle the issue of economic inequality, or does EA assert that itself?
I think EA very definitely targets both poverty and low quality of life. I think factual evidence shows that inequality appears to have a detectable effect both on poverty (even defined in an absolute sense: less egalitarian populations develop completely impoverished sub-populations more easily) and on well-being in general (the well-being effects, surprisingly, show up all across the class spectrum). Therefore someone who cares about optimizing away absolute poverty and optimizing for well-being should care about optimizing for the level of inequality which generates the least poverty and the most well-being.
Obviously the factual portions of this belief are subject to update, on top of my own innate preference for more egalitarian interactions (which is strong enough that it has never seemed to change). My preferences could tilt me towards seeing/acknowledging one set of evidence rather than another, towards believing that "the good is the true", but I was actually as surprised as anyone when the sociological findings showed that rich people are worse off in less-egalitarian societies.
EDIT: Here is a properly rigorous review, and here is a critique.
To make an obvious observation, targeting poverty and targeting economic inequality are very different things. It is clear that EA targets "low quality of life", but my question was whether EA people explicitly target economic inequality -- or they don't and you think they should?
Note that asserting that inequality affects absolute poverty does NOT imply that getting rid of inequality is the best method of dealing with poverty.
As far as I'm aware, EA people do not currently explicitly target economic inequality. I am attempting to claim that they have instrumental reason to shift towards doing so.
It certainly doesn't, but my explanation of capitalism also attempted to show that in this particular system, inequality and absolute poverty share a common cause. Proletarianized Nicaraguan former-peasants are absolutely poor, but their poverty was created by the same system that is churning out inequality -- or so I believe on the weight of my evidence.
If there's a joint of reality I've completely failed to cleave, please let me know.
Poverty is not created. Poverty is the default state that you do (or do not) get out of.
Historical evidence shows that capitalist societies are pretty good at getting large chunks of their population out of poverty. The same evidence shows that alternatives to capitalism are NOT good at that. The absolute poverty of Russian peasants was not created by capitalism. Neither was the absolute poverty of the Kalahari Bushmen or the Yanomami. But look at what happened to China once capitalism was allowed in.
There's evidence that capitalism plus social democracy works to increase well being. You can't infer from that that capitalism is doing all the heavy lifting.
There is evidence that capitalism without social democracy works to increase well-being. Example: contemporary China.
You're speaking in completely separate narratives to counter a highly specific scenario I had raised -- and one which has actually happened!
Here is the fairly consistent history of actually existing capitalism, in most societies: an Enclosure Movement of some sort modernizes old property structures. This proletarianizes some of the peasants or other subsistence farmers (that is, it removes them from their traditionally inhabited land), creating a cheap labor force which is then used in new industries but which has a lower standard of living than its immediate peasant forebears. This has in fact created both absolute poverty and inequality (compared to the previous non-capitalist system). Over time, the increased productivity raises the mean standard of living; whether or not the former peasants get access to that new higher standard of living seems to be a function of how egalitarian society's arrangements are at the time. History appears to show that capitalism usually starts out quite radically inegalitarian but is eventually forced to make egalitarian concessions (known as "social democracy" or "welfare programs" in political-speak) that, after decades of struggle, finally raise the formerly-peasant now-proletarians above the peasant standard of living.
Note how there is in fact causation here: it all starts when a formerly non-industrial society chooses to modernize and industrialize by shifting from a previous (usually feudal and mostly agricultural) mode of production that involved a complicated system of inter-class concessions to mutual survival, to a new and simplified system of mass production, freehold property titles, and no inter-class concessions whatsoever. It's not natural (or rather, predestined: the past doesn't reach around the present to write the future), it's a function of choices people made, which are themselves functions of previous causes, and so on backwards in time.
All we "radicals" are actually pointing out is that if we've seen this movie before, and know how it goes (or if we're simply moral enough to reason in an acausal or even "acausal + veil of ignorance" mode about the transition), we can make different choices at any point in the history to achieve a more desirable result far more directly.
It all adds up to Genre Savvy.
I am sorry, are you claiming that there was neither inequality nor absolute poverty in pre-capitalist societies??
I don't understand what you are trying to say about causality.
Huh? You can make different choices in the present to affect the future. That's very far from "at any point in history".
So, any idea why all attempts to do so have ended pretty badly so far?