You can have some fun with people whose anticipations get out of sync with what they believe they believe.
I was once at a dinner party, trying to explain to a man what I did for a living, when he said: "I don't believe Artificial Intelligence is possible because only God can make a soul."
At this point I must have been divinely inspired, because I instantly responded: "You mean if I can make an Artificial Intelligence, it proves your religion is false?"
He said, "What?"
I said, "Well, if your religion predicts that I can't possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false. Either your religion allows that it might be possible for me to build an AI; or, if I build an AI, that disproves your religion."
There was a pause, as he realized he had just made his hypothesis vulnerable to falsification, and then he said, "Well, I didn't mean that you couldn't make an intelligence, just that it couldn't be emotional in the same way we are."
I said, "So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong."
He said, "Well, um, I guess we may have to agree to disagree on this."
I said: "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong."
We went back and forth on this briefly. Finally, he said, "Well, I guess I was really trying to say that I don't think you can make something eternal."
I said, "Well, I don't think so either! I'm glad we were able to reach agreement on this, as Aumann's Agreement Theorem requires." I stretched out my hand, and he shook it, and then he wandered away.
A woman who had stood nearby, listening to the conversation, said to me gravely, "That was beautiful."
"Thank you very much," I said.
Part of the sequence Mysterious Answers to Mysterious Questions
Next post: "Professing and Cheering"
Previous post: "Belief in Belief"
If I were the host I would not like it if one of my guests tried to end a conversation with "We'll have to agree to disagree" and the other guest continued with "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree." In my book this is obnoxious behavior.
Having fun at someone else's expense is one thing, but holding it up in an early core sequences post as a good thing to do is another. Given that we direct new Less Wrong readers to the core sequence posts, I think they indicate what the spirit of the community is about. And I don't like seeing the community branded as being about how to show off or how to embarrass people who aren't as rational as you.
What gave me an icky feeling about this conversation is that Eliezer didn't really seem to be aiming to bring the man round to what he saw as a more accurate viewpoint. If you've read Eliezer being persuasive, you'll know that this was not it. He seemed more interested in proving that the man's statement was wrong. It's a good thing for people to learn to lose graciously when they're wrong, and to learn from the experience. But that's not something you can force on someone from the outside. I don't think the other man walked away from this experience improved, and I don't think that was Eliezer's goal.
I, like you, love a good argument with someone who also enjoys it. But to continue arguing with someone who's not enjoying it feels sadistic to me.
If I were in this conversation, I would try to frame it as a mutual exploration rather than a mission to discover which of us was wrong. At the point where the other person tried to shut down the conversation, I might say, "Wait, I think we were getting to something interesting, and I want to understand what you meant when you said..." Then proceed to poke holes, but in a curious rather than professorial way.
I'd find it especially obnoxious because Aumann's agreement theorem looks to me like one of those theorems that just doesn't do what people want it to do, and so ends up as a rhetorical c...
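For reference, the theorem being invoked has stronger hypotheses than the dinner-table paraphrase suggests. A standard informal rendering of Aumann's 1976 result (symbols chosen here for illustration) is:

```latex
% Aumann (1976), ``Agreeing to Disagree'' -- informal statement.
% Setup: two agents share a common prior $P$ and each receives
% private information, modeled as partitions of the state space.
% For an event $E$, write their posteriors as
%   $q_1 = P(E \mid \mathcal{I}_1)$ and $q_2 = P(E \mid \mathcal{I}_2)$.
\textbf{Theorem.} If the agents have a common prior, and the values
of $q_1$ and $q_2$ are common knowledge between them, then
$q_1 = q_2$.
```

Both hypotheses (a common prior, and common knowledge of the exact posterior values, not merely of the fact of disagreement) rarely hold between two strangers at a dinner party, which is the gap the comment above gestures at.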