This is full of awesome ideas, somewhat shackled by trying to fit them all into conflict vs mistake theory. Power and persuasion vary along many dimensions of both timeframe and application, and a binary classification is useful only for a very coarse tactical choice.
It's an interesting experiment to apply the classification to animals. If you think conflict vs mistake is about judgement and acceptance, you'll apply mistake theory to animals and forgive them because they know no better. If you think it's about tactics to improve things, you'll apply conflict theory and behaviorally condition them (or physically restrain them) to do what you prefer. Or maybe it's just obvious that mistake theory applies to dogs and conflict theory to cats.
"somewhat shackled by trying to fit them all into conflict vs mistake theory"
:D Yeah, fair point, I just realised I don't link this at all with my earlier post/comment in which I frame conflict theories as strategies for 0-sum games vs mistake theories as strategies for positive-sum games.
My historical trajectory is a story about ehhh... entities playing ever larger (in dimensions of spacetime, energy used, information contained, whatever; number of entities involved; diversity of the entities involved) positive-sum games, while not becoming a single clonal thing. I wonder if that statement holds up if I try to formalise it. I think that's a thought that's been bounced around my head ever since I read https://www.quantamagazine.org/a-new-thermodynamics-theory-of-the-origin-of-life-20140122/
I didn't even think about power and persuasion actually, which now seems very odd. I guess I was thinking in terms of utility outcomes, not the means by which you get to those outcomes.
Gross buzzwords but correctly used?
This all started with a comment on https://www.lesswrong.com/posts/6eNy7Wg7X7Mb622D7/two-kinds-of-mistake-theorists?commentId=TdF8Ra9dGG6ZfaQMf#TdF8Ra9dGG6ZfaQMf which led to me getting a link to https://www.lesswrong.com/posts/no5devJYimRt8CAtt/in-defence-of-conflict-theory. I started writing a reply to that, but realised it was extremely long, so I made it a separate post.
Intro
Very nice and succinct. This immediately made me think of art vs engineering: ambiguous domains where people can't quite serialize how they produce good outputs (though we can recognize good output when we see it; wafts of P vs NP) vs domains where we can algorithmize the production of good outputs and take the ... human flair out of it.
This post is me trying to work out what I think the 'hard work' is.
Is there a point at which conflict theory makes itself obsolete?
Is it still useful in the present, at least for people with utilitarian ethical axioms (all humans' utility is valuable, and aggregate utility should be maximised)?
Mistake theory is about maximising utility for a set of entities; conflict theory is about expanding the set of entities under consideration. If you already want to maximise total human utility (do you, though? and do your actions truly reflect this?), what can conflict theory do for you?
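A rough way to formalise that claim (toy notation of my own, nothing standard): write aggregate utility as a weighted sum over the circle of concern, and notice which part of the expression each theory argues about.

```latex
% Toy notation: S = set of entities that count, w_i = weight on entity i.
\[ U_{\text{total}} = \sum_{i \in S} w_i \, u_i \]

% Mistake theory: hold S and the weights fixed, argue about how to maximise.
\[ \max_{\text{policy}} \; U_{\text{total}} \]

% Conflict theory: argue about S and the weights themselves.
\[ S \to S' \supseteq S, \qquad w_i \to w_i' \]
```

On this toy reading the two theories aren't competing answers to the same question; they operate on different arguments of the same function.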
Mood-setting story: let's pretend we believe history advances over time by some metric
History (including biological history) progresses by expanding the circle of concern, making an individual's utility function include more and more entities outside the individual. It starts as a singular point, the individual itself (say among pre-social animals), grows to include parents and children (in animals that take care of their own young), grows to include the extended family group (most famously in social mammals, eusocial insects, etc.), grows to include the very extended family group (tribes and proto-states and city-states).
Within human history you expand the circle to include women, slaves, children, poor people, people of the same religion or appearance or language group, people absolutely everywhere currently alive, people alive in the past (ancestor worship), potential people alive in the future, animals, nature as a whole[1].
The utility function held by an average person today has expanded to become some sort of universal (dare I say, divine) plan of perfect aesthetic optimisation that has an opinion about absolutely everything and can put a moral weight on any action you perform.
If you are a universal human utilitarian, the expansion of the utility function to include all humans is ... tautological. It's already expanded to that point. The axiom is already 'maximise aggregate human utility' (yes, there are edge cases, utility monsters, bla bla).
However, if you hold some earlier (or just more restrictive) ethical view, each expansion of the utility function requires some sort of conflict-theoretical structure. You had to be persuaded to start caring about this category. This is, perhaps, an activity that looks like a 0-sum game: you now have to spread the same amount of effort across improving the utility of more entities (though it's usually not actually 0-sum, because ... lots of reasons at each step).
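A toy illustration of why each expansion looks 0-sum at first glance but usually isn't (numbers and notation invented here): with a fixed pool of effort E spread over n entities, adding one more looks like pure dilution, unless the expansion itself grows the pool.

```latex
% Fixed effort E over n entities: each expansion looks like pure dilution.
\[ \frac{E}{n} \longrightarrow \frac{E}{n+1} \]

% But if including the newcomers grows the pool (trade, cooperation,
% specialisation), everyone can still come out ahead:
\[ \frac{E'}{n+1} > \frac{E}{n} \quad \text{whenever} \quad E' > E \cdot \frac{n+1}{n} \]
```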
For a universal human utilitarian, all previously existing human conflict theories are unnecessary (or at best, heuristic) and all you need is mistake theory: how do we maximise utility for all humans? Oh yes, this group is not being included fairly in the optimisation function; add them in, obviously. What's all the fuss about?
Of course conflict theorists can say: most people aren't universal human utilitarians. We're fighting to convince people to expand their utility function to include this group and their peculiar preferences. Echoes of this in current events abound.
What conflict theories might a utilitarian experience as actual conflict theories?
Expanding the circle of concern past humans
Maybe expanding the circle of concern to include animal welfare? Should we weight animals as much as humans? Of course, you say? How much violence would we support using against human baby farms and lovers of human-baby-rib BBQ?
How do we integrate non-human animals into our utility function? The debates spawned by this question produce the same types of feelings in me, I think, that people experienced when issues like emancipation were discussed. Obviously emancipation is, to me, axiomatically good, a total duh, so the conflict at the time feels emotionally sterile: how on earth was anyone on the wrong side of that issue? Well, their circles of concern were smaller.
Reducing the circle of concern
The claim here: utilitarianism currently cares about too many people.
Various racist ideologies. Various tribal ideologies of all sorts (whether based on culture, some biological marker, some aspect of heredity, or geographical location). For instance, should a country care about foreign citizens as much as it does about its own?
Non-redemptive approaches to sin: the idea that some people's actions are so bad that their utility should be removed from the global maximisation calculation, or even inverted (punitive measures). Examples: criminals, kulaks, people with the wrong sexual orientation, sinners.
There's a mistake theory version of this, but it's waaaaaaay more complicated and probably game-theoretically unstable: achieve a balance between reforming criminals and deterring crime. You might think of the Norwegian penal system as a clear ideal, but is it really a stable solution? How many people would rather be in a Norwegian prison than walking around free in their own country? Millions? Billions? What level and type of criminality can it actually handle?
Highlighting low-hanging fruit, hypocrisy and/or failings of actually existing utilitarian calculations
"You claim that you are a utilitarian, but don't act accordingly, constantly defecting or leaving loads of other people's utility on the ground."
"You didn't include this group's preferences in your calculations so policies you thought they'd like they actually hate."
"There are huge errors in current policies, as revealed by better empirical measurements of outcomes."
"These policies that we thought would produce Positive Outcome A, are actually mostly producing Negative Outcome Z."
This is more in line with the examples in the OP, I think: conflict-theoretical group action used as an emergency signal that society is not living up to the utilitarian ideal. I don't think major historical conflict theories felt like this to the people living through them.
Accidentally turning approximations of uncertain outcomes into axioms
"You think Policy A will lead to Good Outcome B with probability 50%, I think it will lead to Negative Outcome Z with probability 75%"
Why?
Putting percentages on gut feelings is necessary/terrible/unavoidable/still useful/an endless source of wasted breath and anger/a prerequisite to almost all human activity[2].
Most ways to produce future utility involve some uncertainty. People will decide to support a policy or not in an almost blackbox, internal-machine-learning-model sort of way. It's not possible to rigorously explain why the list of bad outcomes will mostly not happen and the list of good outcomes will mostly happen. It's a complicated mishmash of personal (or inter-personal or ideological) interpretations of past similar policies (and efforts to map or metaphorise why this is the same).
Because the internal process is mostly blackbox, people end up axiomatising their policy preferences, since defending them requires incredibly complicated, even complexity-exploding arguments. See children asking 'why' until you are reduced to yelling back: "Because some things are and some things are not, and things that are cannot not be." And you've only travelled one branch out of a literally infinite set of logical branches connecting your policy preference to some fundamental universal axiom that all people can agree on as a ground truth.
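To make the quoted exchange above concrete (utilities and probabilities invented for illustration, collapsing each side's view to its headline scenario):

```latex
% My estimate: 50% chance Policy A yields Good Outcome B, with U(B) > 0.
\[ \mathbb{E}_{\text{me}}[A] = 0.5 \cdot U(B) > 0 \]

% Your estimate: 75% chance Policy A yields Negative Outcome Z, with U(Z) < 0.
\[ \mathbb{E}_{\text{you}}[A] = 0.75 \cdot U(Z) < 0 \]
```

The signs disagree purely because the input probabilities and outcome weights do, and those inputs are exactly the blackbox gut feelings that resist serialisation, which is how each side ends up defending its conclusion as an axiom.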
Totally different angle: conflict theories work to invert default-exception and good-bad relationships
Private property by default (taxation is taking from the individual) VS group ownership by default (whatever you consume as an individual is a gift).
Existing people in power are good and should stay or even grow in power VS existing people in power are bad and need to have less power. Something something Nietzsche, master vs slave morality.
People should ideally conform to a sex/gender binary VS let everyone do whatever they want in the sex/gender space VS people who conform to the sex/gender binary are actually bad.
Notes
[1] Leaving a good legacy was presumably a concern in ancient Rome just as it is today for the environmental movement. Actually: when did concern for legacy beyond your immediate biological descendants really appear? The development of writing? The development of language and storytelling? The modern environmental movement?
[2] STEM thinkers, analytic philosophers, mathy people, and engineers are basically people who hate ever acting under intractable uncertainty (unknown unknowns; fat-tailed or even statistically unmodellable situations; outcomes that depend on people's blackbox-like preferences), so they've structured their lives to avoid it as much as possible. This is why engineers' suggested solutions to political problems smell of the gulag: human preferences must be standardised so that optimising for them becomes tractable.
Some connection here to Seeing Like a State and probably lots of other things.
To bring out my inner Taleb: uncertainty kills stock market traders, and the ones who survive defend themselves with ever more complex statistical models.
Ironically, if explaining ML decision-making ends up being impossible, programmers will have created their own anathema: life will increasingly be influenced by blackboxes that are even less comprehensible than human minds.
Indeed, lack of explainability (or maybe ... of human-understandable explanations) is the main reason people hate impersonal societal forces. Why should someone who works hard suffer in an economic downturn? How can this be just, or a good way of running the world? Good luck serializing the overall benefits of capitalism in a way that persuades someone who lost their job because someone ate a bat on the other side of the planet.
Or maybe it's because Stockholm Syndrome is evolutionarily adaptive: we don't care about being hurt by someone as long as they give us clear rules we can obey in order to avoid the pain. This has been useful for so long that we've all got the gearing for it, and the only thing we hate is being punished even though we followed the rules. After all, in a hostage situation, or if you're a recently captured slave after a tribal war, that betrayal signals that you need to fight to the death: your controller has signalled that they will probably kill you eventually regardless of what you do.