How do socially-constructed concepts work?!
Negative example: trees. Trees exist, and trees are not socially constructed. An alien AI observing Earth from behind a Cartesian veil would be able to compress its observations by formulating a concept that pretty closely matches what we would call tree, because the atom-configurations we call "trees" robustly have a lot of things in common: once the AI has identified something as a "tree" by observing its trunk and leaves, the AI can make a lot of correct predictions about the "tree" having roots, this-and-such cellular walls, &c. without observing them directly, but rather by inference from knowledge about "trees" in general.
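(To make the "compress its observations" trick concrete, here's a toy sketch of my own, with entirely made-up feature names and probabilities: treat the category as a naive-Bayes-style latent variable, infer it from a couple of observed features, then predict the unobserved features through it.)

```python
# Toy sketch (made-up numbers): the compression trick cashed out as a
# naive-Bayes-style category model.

# P(feature present | category), for a handful of binary features.
CATEGORIES = {
    "tree": {"trunk": 0.95, "leaves": 0.90, "roots": 0.98, "cell_walls": 0.99},
    "rock": {"trunk": 0.01, "leaves": 0.01, "roots": 0.01, "cell_walls": 0.01},
}
PRIOR = {"tree": 0.5, "rock": 0.5}

def posterior(observed):
    """P(category | observed features)."""
    scores = {}
    for cat, feats in CATEGORIES.items():
        likelihood = 1.0
        for feature, present in observed.items():
            p = feats[feature]
            likelihood *= p if present else (1 - p)
        scores[cat] = PRIOR[cat] * likelihood
    total = sum(scores.values())
    return {cat: s / total for cat, s in scores.items()}

def predict(feature, observed):
    """P(unobserved feature | observed features), marginalizing over category."""
    post = posterior(observed)
    return sum(post[cat] * CATEGORIES[cat][feature] for cat in post)

# Having observed only a trunk and leaves, the model is already confident
# about roots it has never seen -- the inference routes through "tree".
print(predict("roots", {"trunk": True, "leaves": True}))  # ~0.98
```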
Positive example: Christmas. Christmas exists. An alien AI observing Earth from behind a Cartesian veil would be able to make better predictions about human behavior in many places around Epoch time 62467200 ± 31557600·n (for integer n) by formulating the concept of "Christmas". However, Christmas is socially constructed: if humans didn't have a concept of "Christmas", there would be no Christmas (the AI-trick for improving predictions using the idea of "Christmas" would stop working), but if humans didn't have a concept of trees, there would still be trees (the AI-trick for improving predictions using the idea of "trees" would still work).
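(Quick arithmetic check on that magic number, not essential to the point: 62467200 seconds after the Unix epoch is midnight UTC on 1971-12-25, and 31557600 seconds is one Julian year of 365.25 days, so the formula picks out the neighborhood of successive Christmases.)

```python
from datetime import datetime, timezone

print(datetime.fromtimestamp(62467200, tz=timezone.utc))  # 1971-12-25 00:00:00+00:00
print(31557600 / 86400)                                   # 365.25 (days in a Julian year)
```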
Semi-positive example: adulthood. Adulthood exists. There's a Sorites situation about exactly how old a human has to be to be an "adult", and different human cultures make different choices about where to draw that line. But this isn't just a boring Sorites non-problem, where different agents might use different communication signals without disagreeing about the underlying reality (like when I say it's "hot" and you say it's "not hot, just warm" and our friend Hannelore says it's "heiß", but we all agree that it's exactly 303.6 K): an alien AI observing Earth from behind a Cartesian veil can make better predictions about whether I'll be allowed to sign contracts by reasoning about whether my Society considers me an "adult", not by directly applying the simple age-measurement test that Society usually uses to make that determination (with exceptions like minor emancipation).
My work-in-progress take: an agent outside Society observing from behind a Cartesian veil, who only needs to predict, but never to intervene, can treat socially-constructed concepts the same as any other: "Christmas" is just a pattern of behavior in some humans, just like "trees" are a pattern of organic matter. What makes social construction special is that it's a case where a "map" is exerting control over the "territory": whether I'm considered an "adult" isn't just putting a semi-arbitrary line on the spectrum of how humans differ by age (although it's also that); which Schelling point the line settles on is used as an input into decisions—therefore, predictions that depend on those decisions also need to consider the line, a self-fulfilling prophecy. Alarmingly, this can give agents an incentive to fight over shared maps!
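(A minimal sketch of the "map controls the territory" point, with an invented cutoff parameter standing in for wherever a given Society happens to draw the line: the predictor can't route around the socially chosen line, because the line is literally an argument to the decision procedure.)

```python
def can_sign_contract(age_years, societys_adulthood_cutoff):
    # The decision procedure Society actually runs consults the shared map.
    return age_years >= societys_adulthood_cutoff

age = 19
print(can_sign_contract(age, societys_adulthood_cutoff=18))  # True
print(can_sign_contract(age, societys_adulthood_cutoff=21))  # False
# Same person, same territory-level facts about them; the prediction changes
# because the map (the Schelling point) changed, which is why agents can end
# up fighting over the map itself.
```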
Trees exist, and trees are not socially constructed.
A lot of the problems with socially constructed concepts stem from their malleability: culture changes them all the time. But if culture had the power and technology to similarly change and create physical things on both sides of the border of (the extension of) the concept of trees, that concept could develop similar problems, especially if people cared to fight over it.
So maybe the concept of chairs is a better example? Are chairs socially constructed? What about topological spaces? I'm guessing the presence of a fight over a concept is more central to its being "socially constructed" in a problematic way than its existence primarily in minds. When there is a fight over a concept, existing outside of minds can help it persist, but only to the extent that the capability to physically change it is limited.
Given how much the comments on this one diverge, it sounds like there's a lot of confusion around it (some of which is confusion around how words work more generally). Guess I'd better talk about it.
I will be focused more on the abstraction aspects than the game-theoretic aspects, though.
Trees exist. The category "tree", as opposed to "shrub" or "plant" or "region of space-time", is a modeling choice - a question of the tradeoff between compression efficiency and precision in the domain you're predicting.
Likewise "Christmas", and "Adult". If you get better understanding of what it means to individuals or regions, you can predict better how they'll behave.
Which leads to my definition of "socially constructed" - these are categories or definitions where it's necessary (or at least very convenient) to have shared understanding o...
Entropy and temperature inherently require the abstraction of macrostates from microstates. Recommend reading this: http://www.av8n.com/physics/thermo/entropy.html if you haven't seen this before (or just want an unconfused explanation).
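Not a substitute for the linked notes, but here's a minimal toy (my own, with an arbitrary choice of system) of what "macrostates abstracted from microstates" cashes out to: the entropy attaches to the coarse-grained description, not to any single microstate.

```python
from math import comb, log2

N = 20  # 20 coin flips; a microstate is the full sequence of outcomes, 2**N of them

def macrostate_entropy_bits(num_heads):
    # Boltzmann-style entropy of the macrostate "exactly num_heads heads":
    # log2 of how many microstates get lumped together under that label.
    return log2(comb(N, num_heads))

for k in (0, 5, 10):
    print(k, round(macrostate_entropy_bits(k), 2))
# 0 heads -> 0.0 bits (a single microstate); 10 heads -> ~17.5 bits.
# The entropy is a property of the coarse-graining, not of any one sequence.
```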
At some point I need to write a post on purely Bayesian statistical mechanics, in a general enough form that it's not tied to the specifics of physics.
I can probably write a not-too-long explanation of how abstraction works in this context. I'll see what I can do.
One we already talked about together is the problem of defining the locality of goals. From an abstraction point of view, local goals (goals about inputs) and non-local goals (goals about properties of the world) are both abstractions: they throw away information. But with completely different results!
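To gesture at why the results differ, here's a minimal toy (names and numbers invented): a goal over inputs and a goal over world-state both discard most of the information in the full history, but the input-goal can't tell the difference between changing the world and tampering with the sensor.

```python
# Three available actions, and what each does to the world vs. to the sensor.
WORLD_TEMPERATURE = {"heat_room": 25.0, "do_nothing": 5.0, "tamper_with_sensor": 5.0}
SENSOR_READING    = {"heat_room": 25.0, "do_nothing": 5.0, "tamper_with_sensor": 99.0}

def local_goal(action):
    # "Local" goal: a function only of the agent's inputs (the sensor reading).
    return SENSOR_READING[action]

def nonlocal_goal(action):
    # "Non-local" goal: a function of the actual state of the world.
    return WORLD_TEMPERATURE[action]

actions = list(WORLD_TEMPERATURE)
print(max(actions, key=local_goal))     # tamper_with_sensor
print(max(actions, key=nonlocal_goal))  # heat_room
```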
When do we learn abstractions bottom-up (like identifying regularities in sense data) versus top-down (like using a controlled approximation to a theory that you can prove will converge to the right answer)? What are the similarities between what you get out at the end?
Abstraction learning in general is an area where I'm not yet fully satisfied with my own understanding, but I'll see if I can set up anything interesting around this.
Not quite sure how specifically this connects, but I think you would appreciate seeing it.
As a good example of the kind of gains we can get from abstraction, see this exposition of the HashLife algorithm, used to (perfectly) simulate Conway's Game of Life at insane scales.
Earlier I mentioned I would run some nontrivial patterns for trillions of generations. Even just counting to a trillion takes a fair amount of time for a modern CPU; yet HashLife can run the breeder to one trillion generations, and print its resulting population of 1,302,083,334,180,208,337,404 in less than a second.
Ooh, good one. If I remember the trick to the algorithm correctly, it can indeed be cast as abstraction.
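The full algorithm is more than a quick sketch, but the core trick can be shown small: canonicalize ("hash-cons") quadtree nodes so that identical sub-patterns are represented, and computed over, exactly once. Here's a toy of just that ingredient (my own sketch, counting population rather than stepping the automaton):

```python
NODES = {}                    # (nw, ne, sw, se) -> node id
POPULATION = {0: 0, 1: 1}     # node id -> live-cell count; ids 0 and 1 are the leaf cells

def node(nw, ne, sw, se):
    """Canonical constructor: identical quadrants always yield the same node id."""
    key = (nw, ne, sw, se)
    if key not in NODES:
        NODES[key] = len(POPULATION)
        POPULATION[NODES[key]] = sum(POPULATION[child] for child in key)
    return NODES[key]

# A 2**60 x 2**60 grid tiled with one live cell per 2x2 block: astronomically
# many cells, but only about 60 distinct nodes ever get created.
pattern = node(1, 0, 0, 0)                              # level 1: one live cell of four
for _ in range(59):                                     # double the side length 59 times
    pattern = node(pattern, pattern, pattern, pattern)

print(POPULATION[pattern])                              # 4**59 == 2**118 live cells
```

HashLife proper layers a second memoization on top of this: roughly, each canonical node caches the future of its centered sub-square a power-of-two generations ahead, which is where the trillions-of-generations speedups come from.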
I'm working on a post of examples showing how to formulate problems involving abstraction (using the abstraction formulation here). This isn't going to solve the problems, just show how to set them up mathematically.
To that end, I'd like to hear particular problems people are interested in which intuitively seem to involve abstraction. Examples of the sort of thing I have in mind:
There is a high chance that your request (or at least something very similar to it) will be incorporated in the post. So, what examples would people like to see?