These things are tricky. Humans are good at self-deception, so it is easy to simply do whatever is convenient for me and then invent a story about how, by coincidence, this is also the best way to help everyone else.
("Why would I send money to fight malaria? If I buy the latest iPhone instead, I am pretty sure some of those components are made in Africa, or at least some minerals are mined there, so I am creating jobs for people who can then spend the extra income on anti-malaria nets. This is even better, because their income is sustainable." Ignoring the fact that if I send $1000 to effective charity, it means $1000 worth of anti-malaria nets, while spending $1000 on iPhone means that less than $1 ends in hands of someone who would need such net.)
On the other hand, reversed selfishness is not philanthropy. Focusing on not having any personal benefit means avoiding all win/win solutions, which is really bad, because those are likely more sustainable than the alternatives. This is about signaling virtue, perhaps to oneself. (By choosing the option that gives you no personal benefit, you send a costly signal that you were not motivated by personal benefit in the first place.)
If you can't trust yourself, perhaps you should seek the opinion of people whose judgment you respect. Yes, even that has the same problem on a higher level -- depending on which conclusion you want to reach, you will be motivated to ask different people -- but at least their answers are not under your direct control; they may surprise you.
But ultimately, I think the answer is: choose the best option that is sustainable for you. You may need some experimenting to find out exactly what that is. The answer may also change later.
Sometimes it seems consequentially correct to do things that would also be good for you, if you were selfish. For instance, to save your money instead of giving it away this year, or to get yourself a really nice house that you expect will pay off pragmatically while also being delightful to live in.
Some people are hesitant to do such things, and prefer, for instance, to keep a habit of donating every year, or to err toward sparser accommodation than seems optimal on the object level. I think this is because, if their behavior is indistinguishable from selfishness, it is hard for them to be sure themselves that they aren’t drifting into selfishness. Not that selfishness would be bad if the optimal behavior were in fact the selfish one; the worry is that if a selfishness-identical conclusion would bring them great personal gains, then they will tend toward concluding it even when they should not.
This all makes sense, but there is something about it that I don’t like. It seems good to be able to be coherent and curious and strategic, and to believe in yourself and what you are doing, in ways that I think this arrangement is at odds with. For instance, under this kind of arrangement you don’t get to have a solid position on ‘is this house worth having?’. You have your object level reasoning, and then not even a meta-level reason to adjust it, but a meta-level reason to distrust your whole thinking process, which leaves you in the vague epistemic state of not being allowed to have certain conclusions about the house at all, or being allowed to have them but not to act on them. And having views but not acting on them is a weird state, because you are knowingly doing what is worse for the broader world, out of misalignment with yourself. And all this is to fend off the possibility that your motives are actually bad, or will become bad. I kind of want to say, ‘if your motives are bad, maybe you should just go and do something bad instead of rigging up some complicated process to thwart yourself’, but presumably there is some complicated relationship between the bad and good parts of you that are trying to negotiate some kind of arrangement here. And maybe that is the way it must be, for you to do good. But it sounds suffocating and enfeebling.
On my preferred way of living, you do notice if you seem too excited about living in a nice house. But if you think you might have ‘the wrong values’, you address that problem head-on, by object level inquiry into what your values are and what you think they ‘should be’. If you think you might be engaging in self-deception, you try to work out whether that is true, and why, and stop it, rather than building a system that lets you move money through it under the assumption that you are self-deceiving.
Relatedly, I think people sometimes donate to causes they don’t work on, even though their position is that the one they work on is better, or hesitate to spend the amounts of money implied by their usual evaluations on improving something in their usual line of work, out of a modest sense that they might be biased about their choice of work, and that the money could really save lives, for instance. On my preferred way of living, if you suspect that you are biased about your choice of cause to work on, such that money is better spent on a different one, you sit down and figure that out and don’t waste your career, rather than just sending your Christmas donation somewhere else and then getting back to work.
This all takes effort though, and won’t be perfect, and mileage varies, and everyone must do their best with whatever state of psychological mess they find themselves in. So quite possibly the ‘avoid non-sacrifice’ methods are better for some people.
But having to be this kind of creature, one that can’t treat itself as an agent, that isn’t allowed certain beliefs, that second-guesses itself and fears parts of itself and ties itself up to thwart them, seems like quite a cost, so I don’t think such strategies should be taken up by default or casually.
This is all my sense, but I haven’t spent huge amounts of time thinking about it (e.g. note that my own position is pretty vague), and I may come around pretty easily.