Zack_M_Davis comments on A Suite of Pragmatic Considerations in Favor of Niceness - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Yes, this parallels why I've been finding hostility in argument increasingly disturbing lately. Insofar as people are rational and honest, they should expect to agree on questions of simple fact, and insofar as they differ on questions of value, then surely they should be able to reach some sort of game-theoretic compromise superior to the default outcome. If you can anticipate disagreeing even after extended interaction, something has gone horribly wrong. I read people's snarky swipes at the psychological motivations of their opponents, and it almost hurts---don't they see the symmetries of the situation? Instead of rushing to call the other mad, why don't they just jump to the meta level and ask, What do I (think I) know that they don't? What do they (think they) know that I don't?
Really, it should all be so simple. Figure out what questions you want to investigate, and update your model of the world based on incoming evidence, including the arguments of others. If you end up disagreeing with someone, just say: I think you're mistaken about these-and-such specific issues because of such-and-these specific reasons. That's it. That's all you have to do. Anger and indignation aren't helping you acquire the map that reflects the territory, so what would be the point?
I suppose I've lost a little bit of my humanity along the Way. What could be more traditionally wholesome than a delicious bout of righteous anger? But on reflection ... it's just not worth it. The sanctity of my map is too important. I'll get my kicks some other way.
"Insofar as people are rational and honest, they should expect to agree on questions of simple fact, and insofar as they differ on questions of value, then surely they should be able to reach some sort of game-theoretic compromise superior to the default outcome. If you can anticipate disagreeing even after extended interaction, something has gone horribly wrong."
Why would people with different motives agree? Surely they should signal holding opinions consistent with their aims, and frequently fail to update those opinions in response to reasoned arguments, in order to signal confidence in their views, thereby hoping to convince others that they are correct and to win others to their side.
Notice that I did say "rational and honest."
But what does this mean? Beliefs are about the world; goals are about what you would do with the world if you could rewrite it atom by atom. They're totally different things; practically any goal is compatible with any belief, unless you're infinitely convinced that some goal is literally impossible. Perhaps you're saying that agents will dishonestly argue that the world is such that their goals will be easier to achieve than they are in fact? I can think of some situations where agents would find that useful. For myself, I care about honesty.
The way I read it, 'rational' and 'honest' referred to the first clause of the sentence only.
For an example of an opinion consistent with an aim, consider a big tobacco sales exec who believes that cigarettes do not cause cancer.
We probably don't actually disagree.
It's probably a bad idea to get so caught up in trappings of rationality that you lose your ability to empathize with humans and understand why, for example, they have pointless arguments.
You give me far too much credit.