Once upon a time, I met someone who proclaimed himself to be purely selfish, and told me that I should be purely selfish as well. I was feeling mischievous(*) that day, so I said, "I've observed that with most religious people, at least the ones I meet, it doesn't matter much what their religion says, because whatever they want to do, they can find a religious reason for it. Their religion says they should stone unbelievers, but they want to be nice to people, so they find a religious justification for that instead. It looks to me like when people espouse a philosophy of selfishness, it has no effect on their behavior, because whenever they want to be nice to people, they can rationalize it in selfish terms."
And the one said, "I don't think that's true."
I said, "If you're genuinely selfish, then why do you want me to be selfish too? Doesn't that make you concerned for my welfare? Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"
The one replied: "Well, if you become selfish, then you'll realize that it's in your rational self-interest to play a productive role in the economy, instead of, for example, passing laws that infringe on my private property."
And I said, "But I'm a small-L libertarian already, so I'm not going to support those laws. And since I conceive of myself as an altruist, I've taken a job that I expect to benefit a lot of people, including you, instead of a job that pays more. Would you really benefit more from me if I became selfish? Besides, is trying to persuade me to be selfish the most selfish thing you could be doing? Aren't there other things you could do with your time that would bring much more direct benefits? But what I really want to know is this: Did you start out by thinking that you wanted to be selfish, and then decide this was the most selfish thing you could possibly do? Or did you start out by wanting to convert others to selfishness, then look for ways to rationalize that as self-benefiting?"
And the one said, "You may be right about that last part," so I marked him down as intelligent.
(*) Other mischievous questions to ask self-proclaimed Selfishes: "Would you sacrifice your own life to save the entire human species?" (If they notice that their own life is strictly included within the human species, you can specify that they can choose between dying immediately to save the Earth, or living in comfort for one more year and then dying along with Earth.) Or, taking into account that scope insensitivity leads many people to be more concerned over one life than over the Earth: "If you had to choose one event or the other, would you rather that you stubbed your toe, or that the stranger standing near the wall there got horribly tortured for fifty years?" (If they say that they'd be emotionally disturbed by knowing about it, specify that they won't know about the torture.) "Would you steal a thousand dollars from Bill Gates if you could be guaranteed that neither he nor anyone else would ever find out about it?" (Selfish libertarians only.)
Taking a cue from some earlier writing by Eli, I suppose one way to give ethical systems a functional test is to imagine having access to a genie. An altruist might ask the genie to maximize the amount of happiness in the universe or something like that, in which case the genie might create a huge number of wireheads. This seems to me like a bad outcome, and would likely be seen as a bad outcome by the altruist who made the request of the genie. A selfish person might say to the genie "create the scenario I most want/approve of." Then it would be impossible for the genie to carry out some horrible scenario the selfish person doesn't want. For this reason selfishness wins some points in my book. If the selfish person wants the desires of others to be met (as many people do), I, as an innocent bystander, might end up with a scenario that I approve of too. (I think the only way to improve upon this is if the person addressing the genie has the desire to want things which they would want if they had an unlimited amount of time and intelligence to think about it. I believe Eli calls this "external reference semantics.")
It seems like this depends more on the person's ability to optimize. An altruist who recognized this flaw would then be able (assuming s/he had the intelligence and rationality to do so) to calculate the best possible wish to benefit the greatest number of people.