Here is a simplified version of the Doomsday argument in Anthropic Decision Theory (ADT), to make the intuitions easier to grasp.

Assume a single agent A exists: an average utilitarian with utility linear in money. Its species survives with 50% probability; denote this event by S. If the species survives, there will be 100 people in total; otherwise A is the only member of its species ever to exist. An independent coin lands heads with 50% probability; denote this event by H.

Agent A must price a coupon C_{S} that pays out €1 on S, and a coupon C_{H} that pays out €1 on H. The coupon C_{S} pays out only on S, so the reward exists only in a world with a hundred people; hence, if S happens, C_{S} is worth (€1)/100 to the average utilitarian. Its expected worth is therefore (€1)/200 = (€2)/400.
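As a sanity check, this valuation can be computed with exact fractions (a sketch; the variable names are my own):

```python
from fractions import Fraction

# Average-utilitarian value of C_S, which pays €1 only on S.
p_S = Fraction(1, 2)             # probability the species survives
pop_if_S = 100                   # population if S holds
value_if_S = Fraction(1, pop_if_S)  # €1 averaged over 100 people
ev_C_S = p_S * value_if_S
print(ev_C_S)  # 1/200
```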

But H is independent of S, so (H,S) and (H,¬S) both have probability 25%. In (H,S), there are a hundred people, so C_{H} is worth (€1)/100. In (H,¬S), there is one person, so C_{H} is worth (€1)/1 = €1. Thus the expected value of C_{H} is (€1)/400 + (€1)/4 = (€101)/400. This is 50.5 times the value of C_{S}.
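The same calculation for C_{H}, again with exact fractions (a sketch under the setup above):

```python
from fractions import Fraction

half = Fraction(1, 2)
# H and S are independent, so each of the four worlds has probability 1/4.
ev_C_H = (half * half) * Fraction(1, 100) + (half * half) * Fraction(1, 1)
ev_C_S = half * Fraction(1, 100)
print(ev_C_H)           # 101/400
print(ev_C_H / ev_C_S)  # 101/2, i.e. 50.5 times the value of C_S
```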

Note that C_{¬S}, the coupon that pays out on doom, has an even higher expected value of (€1)/2=(€200)/400.
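And the doom coupon, whose full €1 accrues to A alone (same sketch conventions as above):

```python
from fractions import Fraction

# C_¬S pays €1 only on doom, where A is the sole person, so nothing is averaged away.
ev_C_not_S = Fraction(1, 2) * Fraction(1, 1)
print(ev_C_not_S)  # 1/2
```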

So, H and S have identical probability, but A assigns C_{S} and C_{H} different expected utilities, valuing C_{H} more highly, simply because S is correlated with survival while H is independent of it (and A assigns an even higher value to C_{¬S}, which is anti-correlated with survival). This is a phrasing of the Doomsday Argument in ADT.

Would it be correct to define selfish utility as sociopathic?

The problem with selfish utility is that even selfish agents are assumed to care about themselves at different moments in time. In a world where copying happens, this is underdefined, so "selfish" has multiple possible definitions.