I feel like the figure-ground idea is useful, but this post runs with it a little too far.
On the one hand, people definitely do have background assumptions about the overall goodness or badness of a thing, and conversations can be unproductive if the participants debate details without noticing how different their assumptions are. The figure-ground inversion is a good metaphor for the kind of shift in perspective you need to get a high-level look at seemingly contradictory models.
On the other hand, conversations about details are how people build their models in the first place. People usually aren't reasoning from first principles; they're taking in tons of information and making tiny updates to their models over time. And most people end up not as pro- or anti-Thing zealots, but as people with complex models of Thing, amenable to further updates. To me, this looks like collective epistemology working pretty well, and it doesn't support the post's claim that our failure to make background assumptions explicit is a major problem with discourse.
"If you’re with someone with an opposite signal, you prioritize boosting your own signal and ignore your own corrective that actually agrees with the other person. However, when talking to someone who agrees with your signal you may instead start to argue for your corrective. And if you’re in a social environment where everyone shares your signal and nobody ever mentions a corrective you’ll occasionally be tempted to defend something you don’t actually support (but typically you won’t because people will take it the wrong way)."
Again, this seems to describe people who are doing just fine: people who understand the need for nuance and adapt their approach to the social context. These don't seem like the behaviors of people who are too fundamentalist about their background assumptions.