The currently unsolvable problem with the ethical theory of consequentialism, and with its subtype utilitarianism, the foundations on which both effective altruism and the theory of rational agents are built, is this: because the world is an extremely complex tangle of interwoven systems, you cannot predict the consequences of any change even a few steps ahead. Not only can you not model these systems adequately; any measurement error, however small, grows until even the 0/1 (true/false) status of the systems and their parts becomes unpredictable after very few steps. To assume that you can estimate the maximal utility of any action, by any criterion, over any considerable period of time is therefore an overconfidence characteristic of many of those who consider themselves rationalists.
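
As a concrete illustration of this sensitivity, here is a minimal Python sketch. It is an illustrative toy, not a model of anything in the argument above: it uses the logistic map, a standard textbook example of a chaotic system, and the starting values and error size are arbitrary choices. Two trajectories that differ by a measurement error of 1e-10 soon disagree even on the one-bit question of whether the state is above or below 0.5.

```python
# Toy demonstration of sensitive dependence on initial conditions.
# The logistic map with r = 4 is chaotic: small errors roughly double
# each step, so a tiny discrepancy soon flips even a yes/no prediction.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-10  # identical states up to a tiny measurement error

for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    # The coarse "0/1" question: is the state above 0.5?
    if (x > 0.5) != (y > 0.5):
        print(f"step {step}: trajectories disagree on the 0/1 question "
              f"(x={x:.6f}, y={y:.6f})")
        break
```

With r = 4 the map amplifies small errors by roughly a factor of two per step, so a 1e-10 discrepancy reaches order one in about 30 iterations. The specific map does not matter; what matters is the exponential error growth.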

This is not to say that one should do nothing. It is to say that one must act, and plan action, on principles other than consequentialism or utilitarianism.

And to be a little more humble. Less wrong, you know. 

TAG:

That's a well-known problem, although Rationalists might not be taking enough notice of it.