In principle, IF the norms are more important to you than to everyone else combined, then there should be some amount you can pay them that is higher than how much they care about the norms but lower than how much you care about them.
(In practice, finding that amount may be hard, and treating it too much like a transaction may have friendship-corroding effects.)
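To spell out the arithmetic (my notation, purely for illustration): if following the norms is worth $b$ to you and costs the rest of the group a combined $c$, then $b > c$ guarantees a mutually beneficial payment $p$ with $c < p < b$, e.g. $p = (b + c)/2$, which leaves you better off by $b - p > 0$ and them better off by $p - c > 0$.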
Your final sentence clarified some things for me:
I now realize that if all players have perfect knowledge of the exact conditions under which you...
Been thinking more about this claim:
Also, with rational agents silence is just as good as dishonesty.
I don't think this claim particularly matters to the thrust of your post, since I think we agree that you're not playing with perfectly rational agents, but I'm interested in the claim as a matter of game theory.
To be clear, I'm interpreting this as saying something at least as strong as: "In a game of Catan where there is common knowledge that all players are perfectly rational, speaking a falsehood is never more advantageous for the speaker than remaining...
Have you talked explicitly with them about the norms you'd like to have? I, for one, would not have assumed that "don't try to manipulate other players to your own advantage" would be an expected norm, but would probably be willing to go along with it if the group asked me to.
You also might consider offering to play with a handicap, so that they don't feel that they need to target you to prevent you from winning too often.
As a rule of thumb, I strongly approve of play groups mutually agreeing on whatever rules and norms work best for them. But I also think...
I submit that "same genome" often coincides with the natural object boundary, but isn't usually a good criterion for the boundary. The common genome is not a significant part of what makes a squirrel a good object.
I think the thing we usually care about i...
Based on this and your other comments in this thread, I suspect you're mixing up questions of...
It's possible to think someone is baking bread wrong without thinking that you should use violence to force them to do it differently. It's possible to think that bakers should be allowed to pick their own baking methods without thinking that all methods produce equally tasty bread.
Civilization typically uses many different levels of coercion for different...
I think you do a good job of arguing (in the earlier part of the article) that it is logically possible to drop the independence axiom without being money-pumped, by giving up logical consequentialism but keeping dynamic consistency. However, I think you do a poor job of arguing (in the later parts) that we should give up consequentialism.
You examine three in-depth examples to try to show that we'd be fine if we dropped independence: ergodicity economics, the Allais Paradox, and the Ellsberg Paradox. In all three cases, I think your argument is missing a critical ...
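(For readers who don't have the Allais Paradox cached, the standard setup, of which the post presumably uses some variant, is:

$$\begin{aligned} 1A&: \$1\text{M with certainty} & \quad 1B&: 89\%\ \$1\text{M},\ 10\%\ \$5\text{M},\ 1\%\ \$0 \\ 2A&: 11\%\ \$1\text{M},\ 89\%\ \$0 & \quad 2B&: 10\%\ \$5\text{M},\ 90\%\ \$0 \end{aligned}$$

Many people prefer 1A over 1B but also 2B over 2A, which violates independence: the second pair is just the first pair with the common 89% chance of \$1M replaced by a common 89% chance of \$0.)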
Sure, give me meta-level feedback.
If it has an integral gain, it will notice this and try to add more and more heat until it stops being wrong. If it can't, it's going to keep asking for more and more output, and keep expecting that this time it'll get there. And because it lacks the control authority to do it, it will keep being wrong, and maybe damage its heating element by asking for more than it can safely do. Sound familiar yet?
From tone and context, I am guessing that you intend for this to sound like motivated reasoning, even though it doesn't particularly remind me of motivated reaso...
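For readers without a control-theory background, here is a toy simulation of the integral-windup behavior the quote describes; the constants and "physics" are invented for illustration and aren't from the post:

```python
# Toy PI controller whose heater saturates (all numbers invented).
setpoint = 70.0      # target temperature
temp = 50.0          # actual temperature
integral = 0.0       # accumulated error
kp, ki = 1.0, 0.5    # proportional and integral gains
max_power = 10.0     # the heater can't deliver more than this

for step in range(5):
    error = setpoint - temp
    integral += error                       # grows every step the error persists
    requested = kp * error + ki * integral  # what the controller asks for
    delivered = min(requested, max_power)   # what the heater can actually do
    temp += 0.05 * delivered - 0.5          # heating minus ambient losses
    print(f"step {step}: temp={temp:.1f}, requested power={requested:.0f}")

# With max power exactly canceling the losses, temp never rises, the error
# never shrinks, and `requested` climbs without bound: the controller keeps
# "expecting that this time it'll get there."
```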
At some point, a temperature control system needs to take actions to control the temperature. Choosing the correct action depends on responding to what the temperature actually is, not what you want it to be, or what you expect it to be after you take the (not-yet-determined) correct action.
If you are picking your action based on predictions, you need to make conditional predictions based on different actions you might take, so that you can pick the action whose conditional prediction is closer to the target. And this means your conditional predictions can...
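A minimal sketch of the distinction, with an invented predictive model: the controller makes one conditional prediction per candidate action, then picks the action whose prediction lands closest to the target.

```python
TARGET = 70.0

def predict(temp: float, power: float) -> float:
    """Predicted next temperature *conditional on* choosing this power level.
    (The dynamics here are made up purely for illustration.)"""
    return temp + 0.05 * power - 0.5

def choose_action(temp: float, actions=(0.0, 5.0, 10.0)) -> float:
    # One conditional prediction per candidate action, not a single
    # unconditional prediction of "what the temperature will be".
    return min(actions, key=lambda a: abs(predict(temp, a) - TARGET))

print(choose_action(65.0))  # -> 10.0, the action predicted to land nearest 70
```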
Your explanation about the short-term planner optimizing against the long-term planner seems to suggest we should only see motivated reasoning in cases where there is a short-term reward for it.
It seems to me that motivated reasoning also occurs in cases like gamblers thinking their next lottery ticket has positive expected value, or competitors overestimating their chances of winning a competition, where there doesn't appear to be a short-term benefit (unless the belief itself somehow counts as a benefit). Do you posit a different mechanism for these case...
... except that you have a natural immunity (well, aversion) to adopting complex generators, and a natural affinity for simple explanations. Or at least I think both of those are true of most people.
It seems pretty important to me to distinguish between "heuristic X is worse than its inverse" and "heuristic X is better than its inverse, but less good than you think it is".
Your top-level comment seemed to me like it was saying that a given simple explanation is less likely to be true than a given complex explanation. Here, you seem to me like you're saying ...
"Possible" is a subtle word that means different things in different contexts. For example, if I say "it is possible that Angelica attended the concert last Saturday," that (probably) means possible relative to my own knowledge, and is not intended to be a claim about whether or not you possess knowledge that would rule it out.
If someone says "I can(not) imagine it, therefore it's (not) possible", I think that is valid IF they mean "possible relative to my understanding", i.e. "I can(not) think of an obstacle that I don't see any way to overcome".
(Note tha...
I interpreted the name as meaning "performed free association until the faculty of free association was exhausted". It is, of course, very important that exhausting the faculty does not guarantee that you have exhausted the possibility space.
Alas, unlike in cryptography, it's rarely possible to come up with "clean attacks" that clearly show that a philosophical idea is wrong or broken.
I think the state of philosophy is much worse than that. On my model, most philosophers don't even know what "clean attacks" are, and will not be impressed if you show them one.
Example: Once in a philosophy class I took in college, we learned about a philosophical argument that there are no abstract ideas. We read an essay where it was claimed that if you try to imagine an abstract idea (say, the concept of a dog...
An awful lot of people, probably a majority of the population, sure do feel a deep yearning to either inflict or receive pain, to take total control over another or give total control to another, to take or be taken by force, to abandon propriety and just be a total slut, to give or receive humiliation, etc.
This is rather tangential to the main thrust of the post, but a couple of people used a react to request a citation for this claim.
One noteworthy source is Aella's surveys on fetish popularity and tabooness. Here is an older one that gives the % of people...
If you're a moral realist, you can just say "Goodness" instead of "Human Values".
I notice I am confused. If "Goodness is an objective quality that doesn't depend on your feelings/mental state", then why would the things humans actually value necessarily be the same as Goodness?
What would you want such a disclaimer or hint to look like?
(I am concerned that if a post says something like "this post is aimed at low-level people who don't yet have a coherent foundational understanding of goodness and values" then the set of people who actually continue reading will not be very well correlated with the set of people we'd like to have continue reading.)
A smart human-like mind looking at all these pictures would (I claim) assemble them all into one big map of the world, like the original, either physically or mentally.
On my model, humans are pretty inconsistent about doing this.
I think humans tend to build up many separate domains of knowledge and then rarely compare them; they can even believe opposite heuristics, selectively remembering whichever one agrees with their current conclusion.
For example, I once had a conversation about a video game where someone said you should build X "as soon as possible", an...
Yes, that is the sort of example I meant. Though of course this particular example does not prove that the game of Catan, in particular, has situations like this.
Based on his other reply, I expect James would want to point out that there is an equivalent equilibrium where player A, instead of saying "button N is blue", says "either button N is blue or no button is", which produces the same outcome without technically lying.
I'm coming to think that there should be some other distinction we can draw that rhymes with the truthful/lying distinction but that talks about consequences instead of semantics, and therefore can't be dodged by relabeling the signals. Still thinking about it.