Clarity comments on Taking Effective Altruism Seriously - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I too have the impression that, for the most part, the "effective" in EA implicitly means "effective within the Overton window". There's the occasional stray 'radical solution', but usually not much beyond "let's judge which of these existing charities (all of which are perfectly societally acceptable) are the most effective".
Now there are two broad categories of explanation for that:
a) Effective altruists want immediate or at least intermediate results / being associated with "crazy" initiatives could mean collateral damage to their efforts / changing the Overton window to accommodate actually effective methods would be too daunting a task / "let's be realistic", etc.
b) Effective altruists don't want to upset their own System 1 sensibilities, their altruistic efforts would lose some of the fuzzies driving them if they needed to justify "mass sterilisation of third world countries" to themselves.
Solutions to optimization problems tend to set to extreme values all those variables which aren't explicitly constrained. The question then is which ideals we're willing to sacrifice in order to achieve our primary goals.
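A toy sketch of this point (my own illustration, not from the original comment): maximize a linear objective over two variables, where one variable is explicitly capped and the other is constrained only by its domain. The hypothetical names ("spending", "coercion") and coefficients are made up purely to illustrate how the optimum pins every unconstrained variable to an extreme.

```python
# Hypothetical linear objective: "lives saved" as a function of two policy levers.
def lives_saved(spending, coercion):
    return 2.0 * spending + 5.0 * coercion

# "spending" is explicitly constrained (capped at 10), while "coercion"
# is limited only by its raw domain (0..100) -- nobody wrote a constraint for it.
candidates = [(s, c) for s in range(0, 11) for c in range(0, 101)]

best = max(candidates, key=lambda sc: lives_saved(*sc))
print(best)  # the optimizer pushes both variables to their extremes: (10, 100)
```

The unconstrained lever ends up maxed out not because anyone chose that, but because nothing in the problem statement pushed back against it; that is the sense in which the ideals we fail to encode as constraints get sacrificed.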
As an example, would we really rather have people decide just how many children they want to create, only to see those children perish in the resulting population explosion? Will we influence that decision based only on "provide better education, then hope for the best", in effect preferring starving families with the choice to procreate whenever to non-starving families without said choice?
I do believe it would be disastrous for EA as a movement to be associated with ideas too far outside the Overton window, and that is a tragedy, because it massively restricts EA's maximum effectiveness.
You're assuming that System 1 sensibilities aren't a useful heuristic for evaluating what's effective, given finite evaluation resources.