Abstract: A demonstration that the philosophy of Effective Altruism (hereafter “EA”), particularly in its reliance on the free market to collect the means then used to promote human welfare, including the reduction of risks to human existence (our definition of EA), is contradictory, and therefore ineffectual.

Epistemic status: Modest confidence. 

  1. EA’s efforts are intended to decrease existential risk, among other risks (by definition of EA).
  2. The present prevailing social, political, and economic systems (hereafter “venal systems”) encourage existential risk, and so are non-altruistic; as non-altruistic venal systems proliferate in overwhelming influence, existential risk increases proportionately (by observation: present venal systems take no feasible measures against existential risk. Indeed, since such venues as LessWrong and the Alignment Blog exist to mitigate existential risk, it follows that existential risk exists, and that present venal systems permit it, or at least do not so preclude it as to make LessWrong and the Alignment Blog redundant, which their users acknowledge they are not).
  3. EA requires the present venal systems to exist, in order to gain from the free market the resources that are supposed to increase human welfare, preserve human lives, and mitigate existential risk (by definition of EA; present venal systems, particularly the economic ones, are necessary conditions for the “effectiveness” that names EA).
  4. The above-named venal systems increase the number of people who act contrarily to the ideals of EA (since such systems require, and so promote, unregulated economies and self-interested people in order to exist, as noted in 2)); that is, as more lives are saved, those saved are statistically likely to join, and thus to increase the proliferation of, the current venal systems which encourage existential risk. (Explication: adherents of EA are already a minority in numbers and in influence, as noted in 2), since contrary to their influence existential risk still exists, which permits the conditions noted in 2); and the likelihood of disproportionate influence decreases at such low population levels.)

(Proof of continued minority, by cases: Assume that a fixed proportion of the human lives saved by EA efforts, thus still a minority, become Effective Altruists; in that case, as human lives are saved, the majority will participate in the existing venal systems, so such systems and their existential risks will proliferate proportionately. Alternatively, suppose that EAs convince a majority of the humans saved to be altruistic; but this is contrary to the observation above, and to 2), that EAs are a minority in population and in influence. By observation, too, there has yet to be an argument or effort by EAs that has made their philosophy an influence-majority over the non-altruistic venal systems, which have not been supplanted. Nor could such an argument exist if EA is to exist, since EA relies on the preponderance of venal systems for its support, by 3). The current venal systems win majorities without need of arguments; and there can be none, any such argument being contradictory, the venal systems being self-contradictory in the destruction of venality by existential risk. Consider the final case, in which a smaller proportion of the lives saved by EA become EAs than were EAs already; but then again there are fewer EAs, and more participants in venal, existentially risky, systems.)
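(A minimal arithmetical sketch of the first case, introducing, solely for illustration, the symbols $N$ for the number of lives saved in a given period and $p$ for the fixed fraction of those saved who become Effective Altruists, neither symbol appearing in the argument above: if $p < \tfrac{1}{2}$, then

\[ pN \;<\; (1-p)N , \]

so each cohort of saved lives adds more participants to the venal systems than to EA, and the absolute population of such systems, with their attendant existential risk, grows even while the EA fraction $p$ remains constant.)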

From 1)-4): EA’s efforts increase population; increased population increases the proliferation of non-altruistic venal systems; proliferating venal systems increase existential risk; and so, by hypothetical syllogism, EA’s efforts increase existential risk, contrary to the definition of EA in 1).
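(The syllogism may be set out schematically, writing, for this sketch only, $E$ for “EA’s efforts proceed,” $P$ for “population increases,” $V$ for “non-altruistic venal systems proliferate,” and $X$ for “existential risk increases”:

\[ E \to P,\qquad P \to V,\qquad V \to X \;\;\therefore\;\; E \to X , \]

which, set against the definitional premise of 1) that EA’s efforts are to decrease existential risk, i.e. $E \to \neg X$, yields the contradiction claimed.)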

Therefore EA is contradictory, and therefore ineffectual (as was to be demonstrated). 

(Libertarianism likewise permits the free-market malfeasance and existential risk that would leave no one remaining to exercise any liberties.)

Since this argument can exist, it follows that there can be no reasoning by EA that will engender majority support, because this argument precludes any support of EA (though this threatens circularity; still, as noted in the proof by cases, any successful argument for majority support of EA destroys EA’s economic supports).

Besides which, the consequentialist or utilitarian ethic underlying EA is subject to the Dread Conclusion (i.e., the Repugnant Conclusion), and so is not of itself adequate. And the claim that human welfare is important is a human bias: nonhuman animals would not rank human welfare highly; only because their welfare goes unconsidered is human welfare so well regarded.
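(A hedged numerical illustration of that conclusion, with the population sizes and welfare levels chosen here purely for exposition: a total-utilitarian calculus compares ten excellent lives at welfare 100 with ten thousand lives barely worth living at welfare 1, and finds

\[ 10 \times 100 = 1{,}000 \;<\; 10{,}000 = 10{,}000 \times 1 , \]

so it prefers the enormous population of barely-worthwhile lives, which is the conclusion held to discredit the ethic.)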

By the terms of the proposed ethic of “Going-On,” described in this author’s foregoing article, EA is more morally right than the present existential-risk-inviting systems. However, the above-noted ethics upon which EA relies entail a solution of the AI values-alignment problem by an “anthropocentric-alignment” approach which, as this author’s foregoing article demonstrates preliminarily (and a subsequent article, by vector-space analysis, shall demonstrate conclusively), is impossible. To the “anthropocentric-alignment” error this author attributes the failure of all foregoing attempts to “solve” alignment; continued erroneous efforts would result in the destruction of the only sentient species yet known.

Regrettably, the ethic of “Going-On” requires that this eventuality be forestalled (absent another entity capable of all possible effective procedures); and it requires those who know it (and who thus must act in obedience to it, if they act at all) to undertake ceaselessly to disseminate the ethic, and the pro-action possibilities that it enables and requires.

It is not credible that Going-On is inexplicable; one belatedly realizes that it is simply ignored, since most of those seeking alignment have thought EA correct, and other ethics therefore redundant. Accordingly it was necessary to nullify EA, with regret tempered by the hope that Going-On – or a more effective, subsequent ethic – will prove adequate to preserve universal consistency and discovery (that is, per Going-On, the ongoing effort to find something more than survival).

Please do not vote without an explanatory comment (votes are convenient for moderators, but are poor intellectual etiquette, sans information that would permit the “updating” of beliefs).

3 comments:
rsaarelm

Please do not vote without an explanatory comment (votes are convenient for moderators, but are poor intellectual etiquette, sans information that would permit the “updating” of beliefs).

This post has terrible writing style. Based on your posting history, you've been here for a year writing similarly badly styled posts; people have commented on the style, and you have neither engaged with the comments nor tried to improve your writing. Why shouldn't people just downvote and move on at this point?

Dagon

Please do not vote without an explanatory comment

I downvoted because I don't think it's useful or interesting. I suspect it's incorrect as well, but the style is off-putting enough that I don't intend to analyze it.

Commenting before voting as requested.

After reading this several times, I think the point being made here can broadly be summed up as:

Capitalism is bad because it relies on self-interest (why?), and the size of the bad is measured by the number of people involved in it (why?). Helping people means they're more likely to both reproduce and be grateful to capitalism in a way that makes them want to preserve the status quo, ergo we ought not help people, because if we do, we will create more capitalist sycophants.

If I've misunderstood you, then it's because you aren't writing simply.

If I haven't misunderstood you, then I find the lack of a suggested alternative irreparably damaging to the claim made.