You can have some fun with people whose anticipations get out of sync with what they believe they believe.
I was once at a dinner party, trying to explain to a man what I did for a living, when he said: "I don't believe Artificial Intelligence is possible because only God can make a soul."
At this point I must have been divinely inspired, because I instantly responded: "You mean if I can make an Artificial Intelligence, it proves your religion is false?"
He said, "What?"
I said, "Well, if your religion predicts that I can't possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false. Either your religion allows that it might be possible for me to build an AI; or, if I build an AI, that disproves your religion."
There was a pause, as the man realized he had just made his hypothesis vulnerable to falsification, and then he said, "Well, I didn't mean that you couldn't make an intelligence, just that it couldn't be emotional in the same way we are."
I said, "So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong."
He said, "Well, um, I guess we may have to agree to disagree on this."
I said: "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong."
We went back and forth on this briefly. Finally, he said, "Well, I guess I was really trying to say that I don't think you can make something eternal."
I said, "Well, I don't think so either! I'm glad we were able to reach agreement on this, as Aumann's Agreement Theorem requires." I stretched out my hand, and he shook it, and then he wandered away.
A woman who had stood nearby, listening to the conversation, said to me gravely, "That was beautiful."
"Thank you very much," I said.
Part of the sequence Mysterious Answers to Mysterious Questions
Before I say anything, I would like to mention that this is my first post on LW, and being only part way through the Sequences I am hesitant to comment yet, but I am curious about positions like yours.
What I find peculiar about your position is that Yudkowsky did not, as he presents it here, initiate the argument. The other person did, by asserting that "only God can make a soul", implying that Yudkowsky's profession is impossible or nonsensical. Voicing an assertion should, in my opinion, be viewed as a two-way street that invites potential criticism. In this particular example the assertion concerned a subject that the man knew would be of far greater interest to Yudkowsky than, say, whether or not the punch being served had mango juice in it.
I'd like to know what you think Yudkowsky should have done in that situation. Do you expect him not to give his own opinion, given the other person's challenge? Or was it something in particular about the way he did it? Isn't arguing inevitable, so that all we can do is try to improve the quality of the dialogue? (That has been my conclusion for the last few years.) Either way, I don't see the hubris you seem to. My usual complaint about discussions is that they are not well informed enough, and that people tend to say things that are too vague to be useful, or outright unsupported. I rarely see a discussion and think, "Well, the root problem here is that they are too arrogant," so I'd like to know what your reasoning is.
It may be relevant that in real life I am known by some as "aggressive" and "argumentative". You probably could have inferred that from my position, but I'd like to keep everything about my position as transparent as possible.
Thank you for your time.
If I were the host I would not like it if one of my guests tried to end a conversation with "We'll have to agree to disagree" and the other guest continued with "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree." In my book this is obnoxious behavior.
Having fun at someone else's expense is one thing, but holding it up in an early core sequences post as a good thing to do is another. Given that we direct new Less Wrong readers to the co...