Konkvistador comments on [Poll] Who looks better in your eyes? - Less Wrong
What I think the two choices signal and what the trade-offs are
Most people, I would guess, are discomforted by sustained duplicity. Without our necessarily realizing it, our positions on matters shift towards those that are convenient for us, whether because of material gain, personal safety, reproductive success, or just plain good signalling. Everyone wants to look good, especially to themselves. Most people will have a hard time "living a lie" and may also eventually fail to emulate all the aspects of behaviour a false belief would entail. The emulator is, in a sense, at a disadvantage compared to someone who honestly holds the personally beneficial belief.
Person B may indeed fail to realize the truth because of this effect, or it may be due to other deficiencies; it doesn't matter. Plainly, person B is worse at making a good map of reality than person A is. He seems to be signalling a deficiency, or rather a failure, in rationality. But he's signalling more than just that, as I will soon show.
Person A, on the other hand, clearly has better map-making skills. He seems to signal more rationality. But if he slips up, he is signalling he may not be the best person to associate yourself with: the benefits and gains he accrues from his stated beliefs will be smaller than those of someone who is a true believer in most things convenient. If he doesn't slip up, he may indeed be signalling that he is unusually comfortable with deceiving people and is harder to move with socially accepted norms; the only people who can do this flawlessly are sociopaths or those vastly more intelligent than their surroundings. Does this sound like someone who is reliably non-threatening? In fact, how exactly do you distinguish such an A from another A who just doesn't care about other people and wishes to preserve his own advantage?
It is safer to cooperate with person B than with person A. With person A, it takes far more resources, cognitive and otherwise, to distinguish the subtypes that share your interests or will not deceive you on a particular matter than it does to distinguish the different types of B. Opportunity costs matter. Needless to say, if you are not yourself exceptionally gifted, these may be resources you simply don't have.
Perhaps some of you doubt at this point that a type A who is not plainly selfish exists. The normative, publicly praised and endorsed course of action, if you disagree with a widely accepted truth or norm, is to voice this disagreement, either so the false paradigm can be overturned or so others can help you overcome your folly. Naturally, the actual norm differs from this, though how strongly depends on where you are. What good does it do you if the same improved map-making abilities that helped you overcome the potentially adaptive biases also tell you that it's currently folly to try to change other people's minds by entering public debate? Why sacrifice yourself if this has negligible impact? If you think the best strategy for doing away with the falsehood, with as little damage to others as possible, is to delay disclosure to a later point, or if you think it's utterly hopeless that the falsehood will be done away with in your lifetime and that your sacrifice will have only minimal impact, why not be duplicitous (for the relevant time frame)? But naturally, here we reach the same test all over again. It is convenient for one to believe that it is best for one to remain silent, isn't it?
What I think the results of this poll might be
I expect a bit below two thirds will choose A, because of LW norms that value rationality and map-making skills. This is somewhat counteracted by LW's explicit norms on truth-telling being closer to its actual norms than in most places, so people might feel that others are more likely to be wrong in their assessments of the negative consequences.
I think in a representative sample of people, most would choose B.
PS: Also, if you recall my previous comment that finding "friendly" As is harder than finding friendly Bs, it seems to me that if LWers respond as I think they will, or even more enthusiastically than that, they will be signalling that they consider themselves uniquely gifted in such (cognitive and other) resources. Preferring person A to B seems the better choice only if it's rather unlikely that person A is significantly smarter than you, or if you are exceptionally good at identifying sociopaths and/or people who share your interests. Choice A is a "rich" man's choice: a choice for someone who can afford the resources it demands. I hope you can also see that for As who vastly differ in intelligence, resources, or specialized abilities, cooperating toward common goals is tricky.
This seems to me relevant to the question of what the values of someone likely to build a friendly AI would be. Has there been a discussion or an article exploring these implications that I've missed so far?
In the recent economic crisis, who was more likely to scam you? A or B? The ones that pissed away the largest amounts of other people's money were those that pissed away their own money.
Assume you know the truth, and know or strongly suspect that person A knows the truth but is concealing it.
OK. You are on the rocket team in Nazi Germany. You know Nazi Germany is going down in flames. Ostensibly, all good Nazis intend to win heroically. You strongly suspect that Dr Wernher von Braun is not, however, a good Nazi. You know he is a lot smarter than you, and you strongly suspect he is issuing lots of complicated lies because of lots of complicated plots. Who then should you stick with?
Why would person A being significantly smarter be a bad thing? Just from the danger of being hacked? I'm not thinking of anything else that would weigh against the extra utility from their intelligence.
If you have two agents who can read each other's source code, they could cooperate in a prisoner's dilemma, since each would have assurance that the other will not defect.
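This idea can be sketched as a toy program. Here agents are functions that can inspect their opponent (a simple identity check stands in for reading source code); the payoff values and the bot names are illustrative assumptions, not from the comment above.

```python
# One-shot prisoner's dilemma where agents can inspect each other before moving.
# (my move, their move) -> my payoff; standard illustrative values.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def clique_bot(opponent):
    # Cooperate only when the opponent is verifiably the same program;
    # otherwise play it safe and defect.
    return "C" if opponent is clique_bot else "D"

def defect_bot(opponent):
    # Unconditional defection, regardless of what it sees.
    return "D"

def play(a, b):
    """Both sides inspect each other, then move simultaneously."""
    move_a, move_b = a(b), b(a)
    return PAYOFFS[(move_a, move_b)], PAYOFFS[(move_b, move_a)]

print(play(clique_bot, clique_bot))  # (3, 3): assured mutual cooperation
print(play(clique_bot, defect_bot))  # (1, 1): inspection prevents exploitation
```

The point is that mutual transparency makes cooperation stable: clique_bot cooperates with a verified copy of itself but cannot be exploited by a defector, since it sees the defector coming.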
Of course, we can't read each other's source code, but if our intelligence, or rather our ability to assess each other's honesty, is roughly matched, the risk of the other side defecting is at its lowest possible point short of that (in the absence of more complex situations where we have to think about signalling to other people), wouldn't you agree? When one side is vastly more intelligent and capable, the cost of defection is clearly much, much smaller for the more capable side.
All else being equal, it seems an A would rather cooperate with a B than with another A, because the cost of predicting defection is lower. In other words, Bs come at a discount in needed cognitive resources, despite their inferior maps, and even As get that discount when working with Bs! What I wanted to say with the PS post was that under certain circumstances (say, very expensive cognitive resources), the opportunity costs of a group of As cooperating, especially As with group norms that actively exclude Bs, can't be neglected.
The cost to predict consciously intended defection is lower.
I can and have produced numerous examples of Bs unintentionally defecting in our society, but for a less controversial example, let us take a society now deemed horrid. Consider the fake Nazi Dr. Wernher von Braun, an example of A. His associates were examples of Bs. He proceeded to save their lives by lying to them and others, causing them to be captured by the Americans rather than the Russians. The Bs around him were busy trying to get him killed, and themselves killed.
I generally find it easier to predict behaviour when people pursue their interests than when they pursue their ideals. If their behaviour matches their interests rather than a set of ideals that they hide, isn't it easier to predict their behaviour?