We need a realm shielded from signaling and judgment.
To support this, there are results from economics / game theory showing that signaling equilibria can be worse than non-signaling equilibria (in the sense of Pareto inefficiency). Quoting one example from http://faculty.econ.ucdavis.edu/faculty/bonanno/teaching/200C/Signaling.pdf
...So the benchmark is represented by the situation where no signaling takes place and employers -- not being able to distinguish between more productive and less productive applicants and not having any elements on which to base a guess -- offer the same wage to every applicant, equal to the average productivity. Call this the non-signaling equilibrium. In a signaling equilibrium (where employers’ beliefs are confirmed, since less productive people do not invest in education, while the more productive do) everybody may be worse off than in the non-signaling equilibrium. This occurs if the wage offered to the non-educated is lower than the average productivity (= wage offered to everybody in the non-signaling equilibrium) and that offered to the educated people is higher, but becomes lower (than the average productivity) once the costs of acquiring education are subtracted.
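To make the quoted result concrete, here is a minimal worked instance; the numbers are my own illustration, not taken from the linked notes:

```latex
% Illustrative numbers (mine, not from the linked notes).
% Two worker types in equal shares: productivities a_H = 2, a_L = 1.
% Education costs satisfy single crossing: c_H = 0.6, c_L = 1.2.
\text{Pooling (non-signaling) wage: } \bar{w} = \tfrac{1}{2}(a_H + a_L) = 1.5
\\
\text{Separating (signaling): educated paid } a_H = 2, \ \text{uneducated paid } a_L = 1
\\
\text{IC: } 2 - c_H = 1.4 > 1 \ \text{(high type educates)}, \qquad 2 - c_L = 0.8 < 1 \ \text{(low type does not)}
\\
\text{Net payoffs under signaling: } 1.4 < 1.5 \ \text{and} \ 1 < 1.5 \;\Rightarrow\; \text{Pareto-dominated by pooling}
```

Employers break even in both cases, so the signaling equilibrium is Pareto-inferior: the education spending is pure deadweight loss that only reshuffles wages.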
In a scapegoating environment, having privacy yourself is obviously pretty important. However, you seem to be making a stronger point, which is that privacy in general is good (e.g. we shouldn't have things like blackmail and surveillance which generally reduce privacy, not just our own privacy). I'm going to respond assuming you are arguing in favor of the stronger point.
This post rests on several background assumptions about how the world works, which are worth making explicit. I think many of these are empirically true, but importantly they are not necessarily true, and not all of them are actually true.
We need a realm shielded from signaling and judgment. A place where what we do does not change what everyone thinks about us, or get us rewarded and punished.
Implication: it's bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust. We currently have scapegoating, not justice. (Note that a just system for rewarding/punishing people will do no worse by having more information, and in particular will do no worse than the null strategy of not rewarding/punishing behavior based on certain subsets of information)
> If others know exactly what resources we have, they can and will take all of them.
Implication: the bad guys won; we have rule by gangsters, who aren't concerned with sustainable production, and just take as much stuff as possible in the short term.
To me this feels like Zvi is talking about some impersonal universal law of economics (whether such law really exists or not, we may debate), and you are making it about people ("the bad guys", "gangsters") and their intentions, like we could get a better outcome instead by simply replacing the government or something.
I see it as something similar to Moloch. If you have resources, it creates a temptation for others to try taking them. Nice people will resist the temptation... but in a prisoners' dilemma with a sufficient number of players, sooner or later someone will choose to defect, and it only takes one such person for you to get hurt. You can defend against an attempt to steal your resources, but the defense also costs you some resources. And perhaps... in the hypothetical state of perfect information... the only stable equilibrium is when you spend so much on defense that there is almost nothing left to take.
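The "sooner or later someone defects" intuition can be made quantitative under a stylized assumption (mine, not the commenter's) that each player independently defects with some small probability p per period:

```latex
\Pr(\text{at least one defection among } n \text{ players over } T \text{ periods})
\;=\; 1 - (1 - p)^{nT} \;\longrightarrow\; 1 \quad \text{as } T \to \infty, \ \text{for any fixed } p > 0
```

So the relevant question for a would-be victim is not whether most players are nice, but how fast the exponent grows.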
I don’t think the 100% tax rate argument works, for several reasons: ...
Good point, I updated towards the extraction rate being higher than I thought (will edit my comment). Rich people do end up existing but they're rare and are often under additional constraints.
I will attempt to clarify which of these things I actually believe, as best I can, but do not expect to be able to engage deeper into the thread.
Implication: it's bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust. We currently have scapegoating, not justice. (Note that a just system for rewarding/punishing people will do no worse by having more information, and in particular will do no worse than the null strategy of not rewarding/punishing behavior based on certain subsets of information)
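The parenthetical note is the standard non-negativity of the value of information; one way to write it, assuming "just" means maximizing the correct expected utility U over reward/punishment policies a:

```latex
\mathbb{E}_{I}\Big[ \max_{a}\, \mathbb{E}[\,U \mid a, I\,] \Big]
\;\ge\; \max_{a}\, \mathbb{E}_{I}\big[ \mathbb{E}[\,U \mid a, I\,] \big]
\;=\; \max_{a}\, \mathbb{E}[\,U \mid a\,]
```

A decision-maker who can condition on extra information I can always ignore it, so more information never hurts such a system; the contested empirical claim is whether actual reward/punishment systems behave like that decision-maker.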
>> What I'm primarily thinking about here is that if one is going to be rewarded/punished for what one does and thinks, one chooses what one does and thinks largely based upon that - you have a signaling equilibrium, as Wei Dai notes in his top-level comment. I believe that this in many situations is much worse, and will lead to massive warping of behavior in various ways, even if those rewarding/punishing were attempting to be just (or even if they actually were just, if there wasn't both common knowledge of this and agreement on what is and isn't just)...
I agree that privacy would be less necessary in a hypothetical world of angels. But I don't find it convincing that removing privacy would bring about such a world, and arguments of this type (let's discard a human right like property / free speech / privacy, and a world of angels will result) have a very poor track record.
That isn't the same as arguing against privacy. If someone says "I think X because Y" and I say "Y is false for this reason" that isn't (necessarily) arguing against X. People can have wrong reasons for correct beliefs.
It's epistemically harmful to frame efforts towards increasing local validity as attempts to control the outcome of a discussion process; they're good independent of whether they push one way or the other in expectation.
In other words, you're treating arguments as soldiers here.
(Additionally, in the original comment, I was mostly not saying that Zvi's arguments were unsound (although I did say that for a few), but that they reflected a certain background understanding of how the world works)
Let's get back to the world of angels problem. You do seem to be saying that removing privacy would get us closer to a world of angels. Why?
OK, I can defend this claim, which seems different from the "less privacy means we get closer to a world of angels" claim; it's about asymmetric advantages in conflict situations.
In the example you gave, more generally available information about people's locations helps the Big Bad Wolf more than Little Red Riding Hood. If I'm strategically identifying with the Big Bad Wolf then I want more information available, and if I'm strategically identifying with Little Red Riding Hood then I want less information available. I haven't seen a good argument that my strategic position is more like Little Red Riding Hood's than the Big Bad Wolf's (yes, the names here are producing moral connotations that I think are off).
So, why would info help us more than our enemies? I think efforts to do big, important things (e.g. solve AI safety or aging) very often get derailed by predatory patterns (see Geeks, Mops, Sociopaths), which usually aren't obvious for a while to the people cooperating with the original goal. These patterns derail the group and cause it to stop actually targeting its original mission. It seems like having more information about strategies would help solve this problem.
Of course, it also gives the predators more information...
...Yes, less privacy leads to more conformity. But I don't think that will disproportionately help small projects that you like. Mostly it will help big projects that feed on conformity - ideologies and religions.
OK, you're right that less privacy gives significant advantage to non-generative conformity-based strategies, which seems like a problem. Hmm.
Your notion of trust seems like a conflation of two opposite things meant by the word.
The first relates to coordination towards clarity, a norm of using info to improve the commons. The second is about covering for each other in an environment where information is mainly used to extract things from others.
Related: http://benjaminrosshoffman.com/humility-argument-honesty/ http://benjaminrosshoffman.com/against-neglectedness/ http://benjaminrosshoffman.com/model-building-and-scapegoating/
Ben Hoffman's views on privacy are downstream of a very extreme world model. On http://benjaminrosshoffman.com/blackmailers-are-privateers-in-the-war-on-hypocrisy a person comments under the name 'declaration of war' and Ben says:
I was a little surprised to see someone else express opinions so similar to my true feelings here (which are stronger than my endorsed opinions), but they’re not me.
Here are two relevant quotes:
It's not surprising if privacy has value for the person preserving it. It's very surprising if it has social value...
I argue in the last piece that it is common even now for people to engineer blackmail material against others and often also against themselves, to allow it to be used as collateral and leverage. That a large part of job interviews is proving that you are vulnerable in these ways.
I don't see anything about existing practices for job interviews in the previous piece.
There's another largely-unaddressed element to the debate: underlying freedoms of transaction and of information-handling. All of the arguments about blackmail are about it as an incentive for something - why are we not debating the things themselves? Arguments against gossip and investigation are not necessarily arguments against blackmail.
Before addressing the incentives, you should seek clarity/agreement on what behaviors you're trying to encourage and prevent. I still have heard very few examples of things that are acceptable when no money is involved (investigating and exposing someone for spite or social one-upmanship) but become unacceptable only because of the blackmail.
Follow-up to: Blackmail
[Note on Compass Rose response: This is not a response to the recent Compass Rose response, it was written before that, but with my post on Hacker News I need to get this out now. It has been edited in light of what was said. His first section is a new counter-argument against a particular point that I made – it is interesting, and I have a response but it is beyond scope here. It does not fall into either main category, because it is addressing a particular argument of mine rather than being a general argument for blackmail. The second counter-argument is a form of #1 below, combined with #2, #3 and #4 (they do tend to go together) so it is addressed somewhat below, especially the difference between ‘information tends to be good’ and ‘information chosen, engineered and shared so to be maximally harmful tends to be bad.’ My model and Ben’s of practical results also greatly differ. We intend to hash all this out in detail in conversations, and I hope to have a write-up at some point. Anyway, on to the post at hand.]
There are two main categories of objection to my explicit thesis that blackmail should remain illegal.
Today we will not address what I consider the more challenging category. Claims that while blackmail is bad, making it illegal does not improve matters. Mainly because we can’t or won’t enforce laws, so it is unclear what the point is. Or costs of enforcement exceed benefits.
The category I address here claims blackmail is good. We want more.
Key arguments in this category: ...
A key assumption is that blackmail mostly targets existing true bad behavior. I do not think this is true. For true or bad or for existing. For details, see the previous post.
Such arguments also centrally argue against privacy. Blackmail advocates often claim privacy is unnecessary or even toxic.
It’s one thing to give up on privacy in practice, for yourself, in the age of Facebook. I get that. It’s another to argue that privacy is bad. That it is bad to not reveal all the information you know. Including about yourself.
This radical universal transparency position, perhaps even assumption, comes up quite a lot recently. Those advocating it act as if those opposed carry the burden of proof.
No. Privacy is good.
A reasonable life, a good life, requires privacy.
I
We need a realm shielded from signaling and judgment. A place where what we do does not change what everyone thinks about us, or get us rewarded and punished. Where others don’t judge what we do based on the assumption that we are choosing what we do knowing that others will judge us based on what we do. Where we are free from others’ Bayesian updates and those of computers, from what is correlated with what, with how things look. A place to play. A place to experiment. To unwind. To celebrate. To learn. To vent. To be afraid. To mourn. To worry. To be yourself. To be real.
We need people there with us who won’t judge us. Who won’t use information against us.
We need having such trust not to risk our ruin. We need to minimize how much we wonder whether someone’s goal is to get information to use against us. Or what price would tempt them to do that.
Friends. We desperately need real friends.
II
Norms are not laws.
Life is full of trade-offs and necessary unpleasant actions that violate norms. This is not a fixable bug. Context is important for both enforcement and intelligent or useful action.
Even if we could fully enforce norms in principle, different groups have different such norms and each group’s/person’s norms are self-contradictory. Hard decisions mean violating norms and are common in the best of times.
A complete transformation of our norms and norm principles, beyond anything I can think of in a healthy historical society, would be required to even attempt full non-contextual strong enforcement of all remaining norms. It is unclear how one would avoid a total loss of freedom, or a total loss of reasonable action, productivity and survival, in such a context. Police states and cults and thought police and similar ideas have been tried and have definitely not improved this outlook.
What we do for fun. What we do to make money. What we do to stay sane. What we do for our friends and our families. What maintains order and civilization. What must be done.
Necessary actions are often the very things others wouldn’t like, or couldn’t handle… if revealed in full, with context simplified to what gut reactions can handle.
Or worse, with context chosen to have the maximally negative gut reactions.
There are also known dilemmas where any action taken would be a norm violation of a sacred value. And lots of values that claim to be sacred, because every value wants to be sacred, but which we know we must treat as not sacred when making real decisions with real consequences.
Or in many contexts, justifying our actions would require revealing massive amounts of private information that would then cause further harm (and which people very much do not have the time to properly absorb and consider). Meanwhile, you’re talking about the bad-sounding thing, which digs your hole deeper.
We all must do these necessary things. These often violate both norms and formal laws. Explaining them often requires sharing other things we dare not share.
I wish everyone a past and future happy Petrov Day.
Part of the job of making sausage is to allow others not to see it. We still get reliably disgusted when we see it.
We constantly must claim ‘everything is going to be all right’ or ‘everything is OK.’ That’s never true. Ever.
In these, and in many other ways, we live in an unusually hypocritical time. A time when people need to be far more afraid both of not being hypocritical, and of their hypocrisy being revealed.
We are a nation of men, not of laws.
But these problems, while improved, wouldn’t go away in a better or less hypocritical time. Norms are not a system that can have full well-specified context dependence and be universally enforced. That’s not how norms work.
III
Life requires privacy so we can not reveal the exact extent of our resources.
If others know exactly what resources we have, they can and will take all of them. The tax man who knows what you can pay, what you would pay, already knows what you will pay. For government taxes, and for other types of taxes.
This is not only about payments in money. It is also about time, and emotion, and creativity, and everything else.
Many things in life claim to be sacred. Each claims all known available resources. Each claims we are blameworthy for any resources we hold back. If we hold nothing back, we have nothing.
That which is fully observed cannot be one’s slack. Once all constraints are known, they bind.
Slack requires privacy. Life requires slack.
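The tax-man point in this section is the standard information-rent observation from economics; a stylized sketch (my illustration, not from the post):

```latex
\text{Known valuation } v: \quad p = v, \qquad \text{your surplus } = v - p = 0
\\
\text{Private } v \sim F: \quad p^{*} = \arg\max_{p}\; p\,\big(1 - F(p)\big),
\qquad \text{your surplus } = v - p^{*} > 0 \ \text{ whenever } v > p^{*}
```

Privacy about v is exactly what keeps the surplus, i.e. the slack, on your side of the table.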
This includes our decision-making process.
If it is known how we respond to any given action, others find best responses. They will respond to incentives. They exploit exactly the amount we won’t retaliate against. They feel safe.
We seethe and despair. We have no choices. No agency. No slack.
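The exploitation logic of the last two paragraphs can be made precise with a stylized model (again my illustration): suppose you retaliate, at cost c to the taker, exactly when the amount taken x exceeds a threshold t.

```latex
\text{Known threshold } t: \quad x^{*} = t \qquad \text{(take exactly what goes unpunished)}
\\
\text{Private } t \sim F: \quad x^{*} = \arg\max_{x}\; \Big[\, x\,\big(1 - F(x)\big) \;-\; c\,F(x) \,\Big]
```

Larger c pushes the optimal taking down: uncertainty about the trigger, paired with possibly out-of-proportion retaliation, is what deters full extraction. This is the strategic ambiguity the section ends on.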
It is a key protection that one might fight back, perhaps massively out of proportion, if others went after us. To any extent.
It is a key protection that one might do something good, if others helped you. Rather than others knowing exactly what things will cause you to do good things, and which will not.
It is central that one react when others are gaming the system.
Sometimes that system is you.
World peace, and doing anything at all that interacts with others, depends upon both strategic confidence in some places, and strategic ambiguity in others. We need to choose carefully where to use which.
Having all your actions fully predictable and all your information known isn’t Playing in Hard Mode. That’s Impossible Mode.
I now give specific responses to the six claims above. This mostly summarizes the previous post.
* – I want to be very clear that yes, information in general is good. But that is a far cry from the radical claim that any and all information is good, and that sharing more of it is everywhere and always good.