Everything is actually about signalling.
Counterclaim: Not everything is actually about signalling.
Almost everything can be pressed into use as a signal in some way. You can conspicuously overpay for things to signal affluence or good taste or whatever. Or you can put excessive amounts of effort into something to signal commitment or the right stuff or whatever. That almost everything can be used as a signal does not mean that almost everything is being used primarily as a signal all of the time.
Signalling only makes sense in a social environment, so things that you would do or benefit from even if you were in a nonsocial environment are good candidates for things that are not primarily about signalling. Things like eating, wearing clothes, having somewhere to sleep, getting medical attention, and learning.
Some of the items from the "X is not about Y" list:
"Food isn’t about nutrition. Clothes aren’t about comfort. Bedrooms aren’t about sleep. Laughter isn’t about humour. Charity isn’t about helping. Medicine isn’t about health. Consulting isn’t about advice. School isn’t about learning. Research isn’t about progress. Language isn’t about communication."
All these are primarily about something other than signalling. Yes they can be "about" signalling some of the time to varying degrees but not as their primary purpose. (At least not without becoming dysfunctional.)
People underestimate the gap between stated preferences and revealed preferences.
Everything is actually about signalling.
These two put together induce in me a sort of dysfunction. I have a stated preference for my stated preferences matching my revealed ones, i.e. genuine honesty over stated-preference-as-signalling. Yet it is highly likely that this stated preference is itself 1. inaccurate, and 2. signalling. And I treat both consistency and honesty as something like terminal values, so I find this situation unacceptable. That seems to leave me three options:
All of these alternatives seem horrible to me!
The brain fills in a false memory of what you meant without asking for permission.
Reference? This terrifies me if true.
(2) and (4) are the correct approaches. "Revealed preferences" are, by and large, just the balance of the monkey-brain's incentives, and scarcely yield any useful information or ordering about the choice you were originally trying to make anyway. Throw them out. You're allowed to be stressed out about how "inhuman" it feels to throw them out, but throw them the hell out! Your conscious self will thank you later.
You are also allowed to optimize your life for taking care of the monkey-brain's wants and needs without impacting the goals of the conscious self.
You are also allowed to deliberately choose which desires and goals get classified as "monkey brain" and which ones as "the real me". After all, in truth, everything comes at least partially from the monkey-brain and everything goes, at least at the last step before action, through the conscious self. Any apparent "division" into "several people" is just your model of what your brain is doing. The real you can eat cookies, wear leather jackets, and have sex sometimes -- oy gevalt, being a good person does not mean being a robot.
I advise something between path 1 and path 2. You fool yourself, saying one thing and doing another; but you legitimately want to be consistent (because it is more convincing if you are). So, once you observe the inconsistency, you react to it. In the objectivist crowd, this has resulted in honesty about selfish behavior. In the lesswrong crowd, it has more often resulted in the dominance of the idealistic goals which previously served only as signalling.
Actually, in practice, 2 is fairly good signalling! It's a costly signal of commitment to altruism. This is basically the only reason the rationalist community can socially survive, I guess. :p
3 is also perfectly valid in some sense, although it's much further from the lesswrong aesthetic. But, see A Dialog On Doublethink. And remember the Litany of Gendlin.
4 is also a necessary step I think, to see the magnitude of the problem. :)
The brain fills in a false memory of what you meant without asking for permission.
Reference? This terrifies me if true.
Again: good terror, justified terror.
I don't have a reference, just an observation; I think if you watch for it you will see that it is true. It also fits with what we hear from things like The Apologist and the Revolutionary and prettyrational memes. It makes social sense that we would do this: the best way to fool others into thinking we meant X is to believe it ourselves. This helps us appear to win arguments (or at least save face with a less severe loss) and, even more importantly, helps us appear to have the best of intentions behind our actions.
People who seem not to do it are mostly just more clever about it. However, the more everyone is aware of this, the less people can get away with it. If you want to climb out of the gutter, you have to get your friends interested in climbing out too -- or find friends who already are trying.
(Once you've convinced yourself it's worth doing!)
People who seem not to do it are mostly just more clever about it.
Hmm. This statement is troublesome because it falls into the category of "I expect you not to see evidence for X in case Y, so here's an excuse ahead of time!" type arguments.
And the rest of the paragraph is an argument that you should not only believe my claim, but convince your friends, too!
How convenient. :p
All of these alternatives seem horrible to me!
The good news is that there are others. Stated and "revealed" preferences don't come out of nowhere, take it or leave it, choose one or the other. I use the scare quotes because the very name "revealed preference" embeds into the vocabulary an assumption, a whole story, that the "revealed" preference is in fact a revelation of a deeper truth. Cue another riff on this.
No, call revealed preferences merely what they visibly are: your actions. When there is a conflict between what you (this is the impersonal "you") want to do and what you do, the thing to do is to find the roots of the conflict. What is actually happening when you do the thing you would not, and not the thing that you would?
Some will answer with this again, but real answers to questions about specific instances are not to be found in any story. Something happened when you acted the way you did not want to. There are techniques for getting at real answers to such questions, involving various processes of introspection and questioning ... which I'm not going to try to expound, as I don't think I can do the subject justice.
I agree that it makes sense there.
The reason I put it where it is, is: belief-edifice-memeplex-paradigm-framework-system-movement-whatevers have members who say different things. Some members say things that are more like a motte and others say things that are more like a bailey. Even if the individual members consistently claim one or the other, this looks suspiciously like a group responding to incentives by committing the fallacy.
Excellent post! This is some good old-fashioned rationality-type stuff right here.
One nanoquibble:
If you know I am selecting people based on this criteria
should be "criterion".
I find this outline helpful. I do however have a quibble.
If you believe X because you want to, any arguments you make for X, no matter how strong they sound, are devoid of informational content about X and should properly be ignored by a truth-seeker.
This seems slightly inaccurate. It would imply that a truth-seeking judge would decide cases just as well (or better) without hearing from the lawyers as with, because lawyers are paid to advocate for their clients. More accurate would be:
If you believe X because you want to, your belief in X is devoid of informational content about X and should properly be ignored by a truth-seeker.
If you believe X for reasons unrelated to X being true, your testimony becomes worthless because your belief in X is not correlated with X. But arguments for X are another matter.
Example: Alice says, "There is no largest prime number," and backs it up with an argument. You are now in possession of two pieces of evidence for Alice's claim C:
(1) Alice's argument. Call this "Argument." It is evidence in the sense that p(C|Argument) > p(C).
(2) Alice's own apparent belief that C. Call this "Alice." It is evidence in the sense that p(C|Alice) > p(C).
Now suppose you discover that Alice has been paid handsomely to make this statement, and that she would gladly have made the opposite claim had her boss wanted her to. If the claim in the post is correct, then both items of evidence are zeroed out, such that:
(3) p(C) = p(C|Argument) = p(C|Alice)
Whereas the correct thing to do is to zero out "Alice" but not "Argument" thus:
(4) p(C|Alice) = p(C)
(5) p(C|Argument) > p(C)
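As a minimal sketch of the difference between the two policies (the prior and likelihood ratios below are illustrative assumptions, not anything given in the example):

```python
# Contrast the two update policies above, using odds-form Bayesian updates.
# C = "there is no largest prime number".  All numbers are assumed.

prior = 0.5         # p(C) before hearing from Alice

# Assumed likelihood ratios for the two pieces of evidence:
lr_argument = 9.0   # p(Argument | C) / p(Argument | ~C)
lr_testimony = 3.0  # p(Alice asserts C | C) / p(Alice asserts C | ~C)

def update(p, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * LR."""
    odds = p / (1 - p)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Taking both pieces of evidence at face value (before learning Alice was paid):
p_naive = update(update(prior, lr_argument), lr_testimony)

# Policy (3): zero out both pieces of evidence (both LRs collapse to 1).
p_zero_both = update(update(prior, 1.0), 1.0)

# Policy (4)-(5): Alice was paid, so her testimony's LR collapses to 1,
# but the argument still carries weight.
p_keep_argument = update(update(prior, lr_argument), 1.0)

print(p_naive)          # ~0.96
print(p_zero_both)      # 0.5 -> p(C) unchanged, as in (3)
print(p_keep_argument)  # 0.9 -> p(C|Argument) > p(C), as in (5)
```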
I think this is an interesting question. If the arguer is cherry-picking evidence, we should heavily discount what they show us, and we are often even justified in updating in the opposite direction of a motivated argument. In the pure mathematical case it no longer matters, so long as we are prepared to check the proof thoroughly; but that seems to break down very quickly for any other situation.
In principle, the Bayesian answer is that we need to account for the filtering process when updating on filtered evidence. This collides with logical uncertainty when "evidence" includes logical/mathematical arguments. But there is a largely separate question of what we should do in practice when we encounter motivated arguments. It would be nice to have more tools for dealing with this!
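As a toy sketch of what "accounting for the filtering process" can look like (the numbers and the search model are illustrative assumptions, not anything from this thread): if an advocate scans many studies and shows you a favorable one whenever any exists, then being shown a favorable study carries almost no information.

```python
# Toy model of updating on filtered evidence (all numbers are assumptions).
# An advocate scans n_studies studies and presents one favoring X if any exists.
# The evidential weight of "I was shown a favorable study" is the ratio
# p(shown | X true) / p(shown | X false).

n_studies = 20
p_favorable_if_true = 0.6   # chance a random study favors X when X is true
p_favorable_if_false = 0.3  # noise: some studies favor X even when X is false

p_shown_if_true = 1 - (1 - p_favorable_if_true) ** n_studies
p_shown_if_false = 1 - (1 - p_favorable_if_false) ** n_studies

likelihood_ratio = p_shown_if_true / p_shown_if_false
print(likelihood_ratio)  # ~1.0008: nearly no evidence, because a motivated
                         # searcher finds something favorable either way.
```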
Yes, this is an interesting issue. One unusual perspective (at least, I have not seen anyone advocate it seriously elsewhere) is the one mentioned by Tyler Cowen here. The gist is that in Bayesian terms, the fact that someone thought an issue was important enough to lie about is evidence that their claim is correct.
The gist is that in Bayesian terms, the fact that someone thought an issue was important enough to lie about is evidence that their claim is correct.
Or their position on the issue could be motivated by some other issue you don't even know is on their agenda.
Or...pretty much anything.
Hmmm. It's better evidence that they want you to believe the claim is correct.
For example, I might cherry-pick evidence to suggest that anyone who gives me $1 is significantly less likely to be killed by a crocodile. I don't believe that myself, but it is to my advantage that you believe it, because then I am likely to get $1.
Someone points out in the comments on that post:
The Bayesian point only stands if P(ClimateGate | AGW) > P(ClimateGate | ~AGW). That is the only way you can revise your prior upwards in light of ClimateGate.
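Restated in odds form (with purely illustrative numbers; the comment supplies none):

```python
# The quoted condition, in odds form.  Both conditionals are assumptions.
p_leak_given_agw = 0.10      # assumed p(ClimateGate | AGW)
p_leak_given_not_agw = 0.05  # assumed p(ClimateGate | ~AGW)

prior_odds = 1.0             # p(AGW) / p(~AGW) before the leak
posterior_odds = prior_odds * (p_leak_given_agw / p_leak_given_not_agw)
print(posterior_odds)  # 2.0 here; swap the two conditionals and the same
                       # evidence revises the prior downwards instead.
```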
Now suppose you discover that Alice has been paid handsomely to make this statement, and that she would gladly have made the opposite claim had her boss wanted her to.
Are we to assume that Alice would have presented an equally convincing-sounding argument for the opposite side had that been her boss' demand, or would she just have asserted the statement "There is a largest prime number" without an accompanying argument?
Hmm... Because the value of her testimony (as distinguished from her argument) is null whichever side she supports, I am not sure the answer matters. But I could be wrong. Does it matter?
Well, I agree that the value of Alice's testimony is null. However, depending on the answer to my original question, the value of her argument may also become null. More specifically, if we assume that Alice would have made an argument of similar quality for the opposing side had her boss requested it, then her argument, like her testimony, depends not on the truth condition of the statement "There is no largest prime number" but on her boss' request. Assuming that Alice is a skilled enough arguer that you cannot easily detect any flaws in her argument, you would be wise to disregard it the moment you figure out that it was motivated by something other than truth.
Note that for a statement like "There is no largest prime number", Alice probably would not be able to construct a convincing argument both for and against, simply because it is a fairly easy claim to prove as far as claims go. However, for a more ambiguous claim like "The education system in America is less effective than the education system in China", it is very possible for Alice's argument to sound convincing and yet be motivated by something other than truth; perhaps Alice harbors strong anti-American sentiments. In this case, Alice's argument can and should be ignored because it is entangled not with reality but with Alice's own disposition.
This advice does not apply to those who happen to be logically omniscient.
Now you need to subdivide the nuances into categories, because that makes them easier to mentally manipulate and remember.
Abram Demski and Grognor
Much of rationality is pattern-matching. An article on lesswrong might point out a thing to look for. Noticing this thing changes your reasoning in some way. This essay is a list of things to look for. These things are all associated, but the reader should take care not to lump them together. Each dichotomy is distinct, and although the brain will tend to abstract them into some sort of yin/yang correlated mush, in reality they have a more complicated structure; some things may be similar, but if possible, try to focus on the complex interrelationships.