'Altruism' for me doesn't mean "I assign infinite value to my own happiness (and freedom, beauty, etc.) and 0 to others', but everyone would be better off (myself included) if I sacrificed my own happiness for others'. So I'll sacrifice my own happiness for others'." Rather, I assign some value to my own happiness, but a lot more value to others' happiness. I care unconditionally about others' happiness.
Since it's only a Prisoner's Dilemma if I value 'I defect, you cooperate' over 'we both cooperate', for me high-stakes 'defecting' would mean directly indulging in my desire to help others, while 'cooperating' via UDT would mean sacrificing humanity's welfare in some small way in order to keep a non-utilitarian agent from doing even more to reduce humanity's welfare. The structure of the PD has nothing to do with whether the agents are selfish vs. altruistic (as long as you take that into account when initially calculating payoffs).
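The point about payoffs can be made concrete. A minimal sketch (the weight and payoff numbers below are illustrative assumptions, not anything from the comment): a game is a Prisoner's Dilemma for a player only if that player's *final* payoffs satisfy the standard ordering Temptation > Reward > Punishment > Sucker, and an altruistic agent's final payoffs already fold in their concern for the other player, which can break that ordering.

```python
def is_prisoners_dilemma(T, R, P, S):
    """Standard PD payoff ordering for one player:
    Temptation > Reward > Punishment > Sucker."""
    return T > R > P > S

def altruistic_payoff(own, other, weight_other=2.0):
    """Fold concern for the other player into one agent's payoff.
    weight_other > 1 models valuing others' happiness more than one's own
    (a hypothetical weighting, chosen only for illustration)."""
    return own + weight_other * other

# Raw material payoffs for the row player: T=5, R=3, P=1, S=0.
# For an egoist these satisfy the PD ordering.
print(is_prisoners_dilemma(5, 3, 1, 0))  # True

# For a strongly altruistic agent, 'I defect, you cooperate' (own=5, other=0)
# scores below mutual cooperation (own=3, other=3), so the same game
# is no longer a Prisoner's Dilemma for that agent.
T = altruistic_payoff(5, 0)  # 5.0
R = altruistic_payoff(3, 3)  # 9.0
P = altruistic_payoff(1, 1)  # 3.0
S = altruistic_payoff(0, 5)  # 10.0
print(is_prisoners_dilemma(T, R, P, S))  # False
```

This is just the comment's parenthetical made explicit: whether the agents are selfish or altruistic doesn't change the PD's structure, as long as the altruism is already accounted for when the payoffs are written down.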
Thought experiments like Singer's are how I found out that I do in fact terminally value people who are distant from me in space (and time). My behavior isn't perfectly utilitarian, but I'd take a pill to become more so, so my revealed preferences aren't what I'd prefer them to be.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.