Most people aren't AIs or even programmers (though the latter are fairly common on LW).
Most people also aren't presented with Omega situations. The reason it's important to solve Newcomb's problem is so that we can build an AI that will respond to the incentives we give it by self-modifying in ways we want it to.
From the last thread:
Meta: