snarles comments on What is/are the definition(s) of "Should"? - Less Wrong Discussion
You need a cooperative-game-based theory of communication to properly define "should." "Should" is a linguistic tag indicating that the sender is staking some of their credibility on the implicit claim that the course of action contained in the body of the message would benefit the receiver.
Certainly not true in all instances.
"You should give lots of money to charity", for instance.
It is still true in that instance. If a person^ told you, "you should give lots of money to charity," and you followed the suggestion, and later regretted it, then you would be less inclined to listen to that person's advice in the future.
^: Where personhood can be generalized.
Suppose I post a statement of shouldness anonymously on an internet forum. Does that statement have no meaning?
Anonymity cannot erase identity; it can only obscure it. Readers of the statement have an implicit probability distribution over the possible identities of the poster, and the readers who follow the suggestion will update their trust metric over that probability distribution in response to the outcome of the suggestion. This is part of what I meant by generalized personhood.
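A minimal sketch of the update described above, under assumed details not in the original: a fixed set of candidate posters, a scalar trust score per candidate, and a simple weighted step toward the observed outcome (the function name and learning rate are hypothetical choices for illustration):

```python
def update_trust(trust, identity_probs, outcome_good, lr=0.5):
    """Shift trust in each candidate author toward the observed outcome,
    weighted by the probability that they wrote the anonymous post."""
    target = 1.0 if outcome_good else 0.0
    return {
        author: t + lr * identity_probs.get(author, 0.0) * (target - t)
        for author, t in trust.items()
    }

trust = {"alice": 0.8, "bob": 0.5}   # prior trust in known posters
probs = {"alice": 0.9, "bob": 0.1}   # distribution over who wrote the post
trust = update_trust(trust, probs, outcome_good=False)
```

Because most of the probability mass falls on one candidate, a bad outcome mostly penalizes that candidate's trust score, while the unlikely author's score barely moves; this is one way the "generalized personhood" update could work.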
What if two people have identical information about all facts about the world and the likely consequences of actions? In your model, can they disagree about shouldness?
The concept of "shouldness" does not exist in my model. My model is behavioristic.
Would you expect two people who had identical information on all facts about the world and the likely consequences of actions to get in an argument about "should", as people in more normal situations are wont to do? Let's say they get in an argument about what a third person would do. Is this possible? How would you explain it?
Then you need to expand your model. How do you decide what to do?
The decision theory of your choice.
EDIT: The difference between my viewpoint and your viewpoint is that I view language as a construct purely for communication between different beings rather than for internal planning.
a) No, how do you decide what to do?
b) So when I think thoughts in my head by myself, I'm just rehearsing things I might say to people at a future date?
c) Does that mean you have to throw away Bayesian reasoning? Or, if not, how do you incorporate a defense of Bayesian reasoning into that framework?