I don’t see any mention of rule consequentialism in this post.
Is your idea just rule consequentialism?
It is, but I'm specifically arguing that a form of rule consequentialism that serves personal happiness about as well as it can be served is in fact rational, for anyone trying to maximize impersonal happiness and probably for anyone who is a consequentialist of any kind.
One thing I believe happens a lot to effective altruists, who are generally (all?) consequentialists of some sort, is that their rational calculus leads them to efface themselves or to lead a life much more miserable than the one they would have led had they spent less time thinking about their altruism. I think EAs sit at the final frontier of the question of how rational self-interest and true altruism are linked: their desire to do good from the point of view of the universe is leading them into great punishment, and one would have difficulty convincing them that the link is strong. Nonetheless, I believe it is strong, and I would like to make the case that a comprehensive set of deontological beliefs, akin to a religious morality, would objectively serve consequentialist ends better than the reliance on explicit reasoning about outcomes that is central to EA.
Here's my argument:
Say you're a utilitarian who wants to maximize the happiness of all beings throughout all of time and minimize their suffering. It's tempting to jump straight to practical things you might do in this direction, but I think you should first clarify the scope of your intention. You are a finite being within a reality that far surpasses your knowledge, and may do so endlessly, and you would like to live your life toward an as-good-as-infinite end. Whatever theory of decision making you subscribe to, and whatever metaphysics you hold, the only way to act safely in such a situation is to act predictably. The amount of information you would need in order to follow your morality exactly, with high probability, is too great for anything but the most minor and inconsequential decisions. If you care about your morality, you should be acting as conservatively as possible.
You want to tilt the balance in favor of your utilitarian morality, so the question becomes "what is the least consequential thing I can do that is likeliest to promote happiness and minimize suffering?"
In other words you want to perform the moral good that is nearest at hand.
This means you start with your own mind and, moving as gently as possible, befriend it and create within it genuine contentment. All of your rational behavior after that is captured within the concept of emotional regulation. If you develop a set of internally consistent rules to follow for the various kinds of events you can guess you may face in life, you will have something to ground yourself with, and thereby regulate your emotions as effectively as you can.
That's the essence of effective altruism.