loqi comments on "Playing to Win" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (16)
That may be, but you said:
Which, to me, is about more than just bluffing and psychological warfare.
I don't know. I could easily imagine the answer being both, depending on circumstance, which makes a characterization as simple as the one you seem to be implying pretty difficult.
This makes the point that cooperative scenarios are harder than purely competitive scenarios, not that we're particularly bad at them. "Balancing your goals with others" is in the end just another way of saying that your goals positively correlate with theirs. Most big problems (and yes, even Magic) contain agents with goals both positively and negatively correlated with yours, so "cooperative or competitive" is not, in general, a binary proposition. Do you think we're particularly bad at planning in the presence of others with positively correlated goals?
If it has competitive elements, then I certainly want to treat it as though it has competitive elements, regardless of my final strategy. But you also seem to be suggesting that approaching an objective competitively is inherently less efficient than approaching it cooperatively. Surely you don't mean that.
Take my comments in light of the context.
I can't really get a handle on where you're coming from. Are you saying that it is often useful to bluff the people you are cooperating with, or would that be a once-in-a-blue-moon kind of situation? Can you give an example of it helping?
Only if you have total knowledge of the situation... Consider the human body: the places where competition helps it achieve objectives (possibly the brain, and the immune system) are the portions trying to gain knowledge about the outside world. Can you tell me how competition would help the human body apart from in these situations?
Drivers often slow down or stop far ahead of time for pedestrians, wasting more of their time to do so than it costs the pedestrian to wait for the car. When I'm on foot and anticipate this, I often bluff the driver by looking away or pretending to change direction. It's minor, but effective and quite frequent.
What about perfect knowledge of a prisoner's dilemma involving non-cooperative agents?
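For concreteness, here is a minimal sketch (in Python, using the standard assumed payoff values T=5, R=3, P=1, S=0) of why perfect knowledge alone doesn't rescue a one-shot prisoner's dilemma: defection remains each agent's best response no matter what the other does.

```python
# One-shot prisoner's dilemma with conventional (assumed) payoffs:
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
# PAYOFF[(my_move, their_move)] -> (my_payoff, their_payoff)
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move):
    """With perfect knowledge of the payoff matrix, pick the move
    that maximizes our own payoff against a fixed opponent move."""
    return max("CD", key=lambda m: PAYOFF[(m, opponent_move)][0])

# Defection dominates: it is the best response to either move,
# so two perfectly informed non-cooperative agents both defect.
assert best_response("C") == "D"
assert best_response("D") == "D"
```

So under these (standard) payoffs, perfect knowledge doesn't produce cooperation by itself; both agents still land on mutual defection.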
Could you do it by signaling openly?
What do you mean by non-cooperative agents: that they always defect, or that they don't communicate? And do the agents have perfect knowledge, or is there a third party?