Once upon a time, I met someone who proclaimed himself to be purely selfish, and told me that I should be purely selfish as well. I was feeling mischievous(*) that day, so I said, "I've observed that with most religious people, at least the ones I meet, it doesn't matter much what their religion says, because whatever they want to do, they can find a religious reason for it. Their religion says they should stone unbelievers, but they want to be nice to people, so they find a religious justification for that instead. It looks to me like when people espouse a philosophy of selfishness, it has no effect on their behavior, because whenever they want to be nice to people, they can rationalize it in selfish terms."
And the one said, "I don't think that's true."
I said, "If you're genuinely selfish, then why do you want me to be selfish too? Doesn't that make you concerned for my welfare? Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"
The one replied: "Well, if you become selfish, then you'll realize that it's in your rational self-interest to play a productive role in the economy, instead of, for example, passing laws that infringe on my private property."
And I said, "But I'm a small-L libertarian already, so I'm not going to support those laws. And since I conceive of myself as an altruist, I've taken a job that I expect to benefit a lot of people, including you, instead of a job that pays more. Would you really benefit more from me if I became selfish? Besides, is trying to persuade me to be selfish the most selfish thing you could be doing? Aren't there other things you could do with your time that would bring much more direct benefits? But what I really want to know is this: Did you start out by thinking that you wanted to be selfish, and then decide this was the most selfish thing you could possibly do? Or did you start out by wanting to convert others to selfishness, then look for ways to rationalize that as self-benefiting?"
And the one said, "You may be right about that last part," so I marked him down as intelligent.
(*) Other mischievous questions to ask self-proclaimed Selfishes: "Would you sacrifice your own life to save the entire human species?" (If they notice that their own life is strictly included within the human species, you can specify that they can choose between dying immediately to save the Earth, or living in comfort for one more year and then dying along with Earth.) Or, taking into account that scope insensitivity leads many people to be more concerned over one life than the Earth, "If you had to choose one event or the other, would you rather that you stubbed your toe, or that the stranger standing near the wall there gets horribly tortured for fifty years?" (If they say that they'd be emotionally disturbed by knowing, specify that they won't know about the torture.) "Would you steal a thousand dollars from Bill Gates if you could be guaranteed that neither he nor anyone else would ever find out about it?" (Selfish libertarians only.)
Obviously Eliezer thinks that the people who agree with the arguments that convince him are intelligent. Valuing people who can show your cherished arguments to be wrong is very nearly a post-human trait - it is extraordinarily rare among humans, and even then unevenly manifested.
On the other hand, if we are truly dedicated to overcoming bias, then we should value such people even more highly than those whom we can convince to question or abandon their cherished (but wrong) arguments/beliefs.
The problem is figuring out who those people are.
But it's very difficult. If someone can correctly argue me out of an incorrect position, then they must understand the question better than I do, which makes it difficult or impossible for me to judge their argument on its merits. Maybe they just swindled me, and my initial naive interpretation is really correct, while their argument has a serious flaw that someone more schooled than I am would recognize?
So I'm forced to judge heuristically by signs of who can be trusted.
I tentatively believe that a strong sign of a person who can help me revise my beliefs is a person who is willing to revise their beliefs in the face of argument.
Eliezer's descriptions of his intellectual history and past mistakes are very convincing positive signals to me. The occasional mockery and disdain for those who disagree is a bit of a negative signal.
But this comment here is not a negative signal at all, for me. Why? Because even if Eliezer were wrong, the other party's willingness to reexamine is a strong signal of intelligence. Confirmation bias is so strong that the willingness to act against it is of great value, even if this sometimes leads to greater error. A limited, faulty error-correction mechanism (with some positive average value) is dramatically better than no error-correction mechanism in the long run.
So yes, if I can (honestly) convince a person to question something that they previously deeply held, that is a sign of intelligence on their part. Agreeing with me is not the signal. Changing their mind is the signal.
It would be a troubling sign for me if there were no one who could convince me to change any of my deeply held beliefs.