Retrospectively, I'd say that I was doing counterintuitiveness-seeking. "Hey, look at this, the commonly used extremely simple model says that definitely P, while this more complex model (which seems to me to be more descriptive of the world) says that maybe not P." This is mildly dangerous on its own, because while it runs on truthseeking, it also subordinates that to contrarianism. And doing this on a political topic was particularly stupid of me.
Again, this is exactly the reason I put the references there. They are not a signalling device saying "look at me, I read all these things" but a tool: they spare me from having to recreate each author's argument for why their model is adequately explanatory, in terms that most of their readers already accept. This saves time both for readers of this post who have at some earlier point read some (or perhaps all) of these articles, and for me. The models are explicit.
The terms are, on the other hand, necessarily vague. This is a general principle of all models (to be computationally tractable for the human brain, models need to simplify and admit some level of uncertainty and error) as well as a particular feature of political coalitions, where individual people and even groups of people sometimes support/vote for some party "for idiosyncratic reasons", an expression that pretty much means "for reasons that I don't bother to model because I expect that doing so wouldn't be worth the effort". I can't give you the Moon, the exact list of how every person is going to vote in the 2024, 2028, etc. elections, but I can point my finger toward the Moon and say "you know, socialists".
Yes.
Unfortunately, as far as I can tell, "left" is commonly understood to mean the whole Thrive coalition. I figured that using "left" would be more confusing/absurd-looking than using "socialist".
For the purposes of the argument, I'm using a model where political behaviors are largely a result of personality traits (thrive/survive, cognitive decoupling, and cultural class membership), with most people using the theories as justification. I.e., theories have negligible influence; they are not causes but consequences of coalitions. This is a simplification, but not an unreasonable one ("all models are wrong, some are useful").
This is exactly why I put the sources, including the "tilted political compass" model I'm referring to, right at the top. Technically the author uses the label "left" for what I'm calling "socialist", but his description of the quadrant's internal logic very clearly fits with what is usually called socialism, including by many of the people the label refers to. He even remarks:
"This has lead to a game of linguistic musical treadmills where liberals try to claim an identity apart from the left without joining the right, while leftists try to prevent them from doing so."
I edited the post slightly, hopefully it will be less ambiguous.
By changing a mind, you can change what it prefers; you can even change what it believes to be right; but you cannot change what is right. Anything you talk about, that can be changed in this way, is not 'right-ness'.
If the characters were real people, I'd say here Obert is "right" while having a wrong justification. Just extrapolate the evolutionary origins of moral intuitions into any society in approximate technological stasis. "Rightness" is what the evolutionarily stable strategy feels like from the inside, and that depends on the environment.
If the population is not limited by the availability of food, so that single mothers can feed their children, then some form of low-key polygyny/promiscuity is the reproductive strategy that ends up as the only game in town.
If instead food limits population, monogamy comes out victorious (for the bulk of the population, at least). If, additionally, hand-labor is expensive, then we can say that women are economically valuable (even if outright regarded as assets, they are very precious ones) and they can negotiate comparatively good treatment (as in, compared to the next paragraph). We might see related rituals, like bride-price, or the marriage ceremony looking like a kidnapping (the theft of a valuable laborer).
On the other hand, if hand-labor is cheap, then the output of a worker may not even earn the food necessary to sustain herself, and women are economic liabilities apart from their reproductive capacity. It is under these circumstances that we can find veiling, guarding, honor killings, FGM, and sati (killing widows). Groom-price (often confused with a different form of dowry under a single label) and, to avoid it, groom kidnapping happen here, too.
"Moral progress" happens by the environment changing the payoffs to strategies. Hating other tribes goes away temporarily when they become allies, and permanently when the allied tribes merge and it becomes too difficult to tell who belongs to which tribe. ("I think I'm three-eights blegg, by my maternal grandfather and by my paternal...")
One implication is that we have so much discussion on the nature of morality exactly because it is unclear what (if any) human behavior stands the best chance of propagating itself into the future with high fidelity. Alternative phrasing: this is an age of whalefall, and we get to implement policies other than morality (morality being the policy that satisfies Moloch). (This is not a new claim: the evolutionary origins of moral intuitions mean that morality is what past policies of satisfying Moloch feel like from the inside.)
That is my point: the people who think in this way are not unreasonable, they are not evil mutants or anything. They just happened to "ask the wrong question" at the starting point, and if they follow it tenaciously, they wind up with insane conclusions.
Once you have a stable epistemology based on an observer-independent reality, you can say that "oh, by the way, minds are part of causality a.k.a. reality, thus people can have beliefs about what other people believe". In the cartographic analogy, this comes out clunky: "maps are part of the terrain, therefore maps can depict facts about other maps", which I suspect is intentional, to make the claim that this is a degenerate edge case, not a central example. You can hold your nose and survey opinions.
But this is very much a second step. Try to take it first, and you stand a good chance of falling headlong into the bizarro-worldview where polls stand in for laboratories, opinions are the only sort of evidence there is, and engineers must have found a way to LARP nigh-infinite confidence because apparently their technobabble can convince most people in a way that crystal healers cannot.
I'm not arguing against relying on other people and outsourcing knowledge. I'm barely arguing for any action; mostly I'm describing what tends to happen regrettably often to people who base the definition of "knowledge" around answering questions like "who is popular" rather than "what will this program do". In fact, both epistemologies will contain the concept of empirical verification! In the anti-epistemology, going to everyone in class and privately asking "hey, is Alice popular?" is the analog of empiricism.
I don't mean to inspire cruelty. If I successfully gave you understanding, you can use it for kindness, pity or cruelty as you see fit. Mostly I wrote the last paragraphs in the tone of "Humans are Cthulhu" as seen through the eyes of someone who thinks in this anti-epistemology.
Your answer to "objective popularity" is only slightly different from common knowledge, and it has the same properties of being fundamentally observer-dependent. Ask some Greens and some Blues separately "is X popular?" where X is a politician, and you get two very different results. Similarly, "possible joke #3852 is funny" is true for one audience, false for another. "The Sun goes around the Earth" is true for a bunch of hunter-gatherers, false for a group of astronomers. Wait, wait, what? "true for some group" i.e. observer-dependence of the answer-generating process.
Compare the alternative. If someone sticks to "either the question is ill-posed, or the answer must be observer-independent" a bit too strictly, they will end up either concluding that popularity is a wrong concept and doesn't exist, or falling into the mind projection fallacy and concluding that there must be a little "is-popular" label attached to people.
From The Simple Truth:
“Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.”
If for whatever reason someone builds their epistemology around popularity as a prototypical use-case, they will necessarily make experimental results dependent on people's expectations in some way. They will say, using the words from the quote, that 'reality' is literally made out of 'belief'.
I was hoping to compress the description of behaviors that are otherwise baffling (surprising, difficult to explain, high-entropy) but common.
The particular claim you quoted is that, since in the anti-epistemology it is assumed that statements don't refer to anything, there is no difference between e.g. "being an astrologer" and "successfully pretending to be an astrologer". People go up to you, ask "why did I stub my toe yesterday?", you say "ah, it happened because Mercury is retrograde and Jupiter is in the house of Gemini", and if they think you sounded like what an astrologer is supposed to sound like, they walk away feeling satisfied but without having learned anything.
This mostly reminds me of SSC's discussion of Jaynes' theory. An age where people talk out loud to their invisible personal ba/iri/daemon/genius/angel-on-the-shoulder, which (in a similar manner as clothes) is in practice considered loosely, but not strictly, a part of the person. Roughly everyone has one, thus the particular emotional need it fulfills is largely factored out of human interaction. (I believe a decade or two ago there was a tongue-in-cheek slogan to the effect of "if the government wants to protect marriages, it should hire maids/nannies".) Social norms (social technology) adjust gracefully, just like they adjusted quickly and seamlessly to contraceptives factoring apart child-conception and sex. (Um.)
Separately: it would be an interesting experiment to get serial abuse victims to talk to chatbots at length. One of the strong versions of the unflattering theory says that they might get the chatbots to abuse them, because that's the probable completion to their conversation patterns.