It's a common framing, so I don't intend to pick on you, but I think the key issue isn't levels of trust but levels of trustworthiness. Yes, there can be feedback effects in both directions between trust and trustworthiness, but fundamentally, it is possible for people and institutions of high trustworthiness to thrive in an otherwise low-trust, low-trustworthiness society. Indeed, lacking competitors, they may find it particularly easy to do so, and, through gradual growth and expansion, lead to a high-trust, high-trustworthiness society over time. It is not possible for people and institutions of high trust to thrive in an otherwise low-trust, low-trustworthiness society, as they will be taken advantage of.
You can't bootstrap a society to a high-trust equilibrium by encouraging people to trust more. You need to encourage them to keep their promises.
I think this line of thinking is productive. Other thoughts:
For cooperative agents to thrive among non-cooperators, they must be able to identify other cooperators. Of course you can wait for the non-cooperators to identify themselves (via an act of non-cooperation, as in tit-for-tat, or via a costly signal), but other agents will inevitably rely on additional heuristics and information to predict the hidden strategies of others, and, when the agents are human, they will do this in a risk-averse way.
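To make the tit-for-tat point concrete, here is a minimal Python sketch of a round-robin iterated prisoner's dilemma. The payoff values and the population mix (mostly defectors, a small cluster of conditional cooperators, one unconditional truster) are my own illustrative assumptions, not anything from the thread:

```python
import itertools
from collections import defaultdict

# Standard prisoner's dilemma payoffs (illustrative assumption;
# any values with T > R > P > S behave qualitatively the same).
T, R, P, S = 5, 3, 1, 0

def payoff(me, them):
    """My payoff for one round, given both moves ('C' or 'D')."""
    if me == "C":
        return R if them == "C" else S
    return T if them == "C" else P

# Each strategy sees only the opponent's past moves.
def always_cooperate(opp_history):
    return "C"  # unconditional trust

def always_defect(opp_history):
    return "D"  # the non-cooperator

def tit_for_tat(opp_history):
    # Cooperate first, then mirror: a non-cooperator identifies
    # itself by its first defection and is punished thereafter.
    return opp_history[-1] if opp_history else "C"

def play(strat_a, strat_b, rounds=200):
    """Total scores for one iterated match between two strategies."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += payoff(a, b)
        score_b += payoff(b, a)
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# A mostly non-cooperative population containing a small trustworthy
# cluster (the mix is an assumption chosen for illustration).
population = (
    [("always_defect", always_defect)] * 10
    + [("tit_for_tat", tit_for_tat)] * 3
    + [("always_cooperate", always_cooperate)]
)

totals, counts = defaultdict(int), defaultdict(int)
for name, _ in population:
    counts[name] += 1
for (name_a, a), (name_b, b) in itertools.combinations(population, 2):
    score_a, score_b = play(a, b)
    totals[name_a] += score_a
    totals[name_b] += score_b

for name in sorted(totals, key=lambda n: -totals[n] / counts[n]):
    print(f"{name}: average score {totals[name] / counts[name]:.0f}")
```

With these numbers the tit-for-tat players finish with the highest average score: they earn full mutual cooperation inside their own cluster and lose only a single round's payoff to each defector, while the unconditional cooperator finishes last, which is exactly the "taken advantage of" failure mode above.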
Accordingly, a low-trust society (one in which no single entit...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, not Main.
4. Open Threads should start on Monday and end on Sunday.