There's a technical sense in which writing a piece of computer software consumes electricity and calories and so it's not "from nothing", but I think that that framing does more to obscure than to illuminate the difference that I'm pointing to.
If the total value of everything in the wizard's workshop is higher when they finish than it was when they started, then I think it makes sense to say that the wizard has created value, even if they needed some precursors to get the process started.
I think an important distinction is that wizards create and kings allocate; if you have a bunch of wizards, they can all wield their powers mostly without interfering with each other and their results can accumulate, whereas if you have a bunch of kings then (beyond some small baseline amount) they basically compete for followers and the total power being wielded doesn't increase.
On my model, the strongest individual people around are kings, but adding more kings doesn't typically make civilization stronger, because kings basically move power around instead of creating it. (Though kings can indirectly create power by e.g. building schools, and they can reveal hidden power by taking a power that was previously being squandered or fighting against itself and directing it to some useful end.)
I do think it's pretty unfortunate that the strategies that make civilization stronger are often not great strategies for maximizing personal power. I think a lot of civilizational ills can be traced back to this fact.
Your example is wrong because you are not leaving the A+B case unchanged.
On what basis do you claim that the A+B case should be unchanged? The entire point of the example is that Carol now actually has the power to stop A+B and thus they actually can't do anything without her on board.
If you are intending to make some argument along the lines of "a veto is only a formal power, so we should just ignore it" then the example can trivially be modified so that B's resources are locked in a physical vault with a physical lock that literally can't be opened without C. The fact that B can intentionally surrender some of his capabilities to C is a fact of physical reality and exists whether you like it or not.
I think we already live in a world where, if you are dealing with a small business, and the owner talks to you directly, it's considered acceptable to yell at them if they wrong you. This does occasionally result in people yelling at small business owners for bad reasons, but I think I like it better than the world where you're not allowed to yell at them at all.
The main checks on this are (a) bystanders may judge you if they don't like your reasons, and (b) the business can refuse to do any more business with you. If society decides that it's OK to yell at a company's designated representative when the company wrongs you, I expect those checks to function roughly equally well, though with a bit of degradation for all the normal reasons things degrade whenever you delegate.
(The company will probably ask their low-level employees to take more crap than the owners would be willing to take in their place, but similarly, someone who hires mercenaries will probably ask those mercenaries to take more risk than the employer would take, and the mercenaries should be pricing that in.)
But they need money for food and shelter.
So do the mercenaries.
The mercenaries might have a legitimate grievance against the government, or god, or someone, for putting them in a position where they can't survive without becoming mercenaries. But I don't think they have a legitimate grievance against the village that fights back and kills them, even if the mercenaries literally couldn't survive without becoming mercenaries.
And as far as moral compromises go, choosing to be a cog in an annoying, unfair, but not especially evil machine is a very mild one.
Shouting at them is a very mild response.
You say you don't expect the shouting to do any good, so what makes it appropriate? If we all go around yelling at everyone who represents something that upsets us, but who has a similar degree of culpability to the gate attendant, we're going to cause a lot of unnecessary stress and unhappiness.
If the mercenary band is much stronger than your village and you have no realistic chance of defeating them or saving anyone, I still think it's reasonable and ethical to fight back and kill a few of them, even if it makes some mercenaries worse off and doesn't make any particular person better off.
At a systemic level, this still acts as an indirect incentive for people to behave better. (Hopefully, the risk of death increases the minimum money you need to offer someone to become a mercenary raider, which makes people less inclined to hire mercenary raiders, which leads to fewer mercenary raids. Similarly, shouting at a secretary hopefully indirectly increases the cost of hiring secretaries willing to stand between you and a person you're harming.)
Though I also kinda feel it's a fair and legitimate response even if you can prove in some particular instance that it definitely won't improve systemic incentives.
Bad people react to this by getting angry at the gate attendant; good people walk away stewing with thwarted rage.
Shouting at the attendant seems somewhat appropriate to me. They accepted money to become the company's designated point of interface with you. The company has asked you to deal with the company through that employee, the employee has accepted the arrangement, the employee is being compensated for it, and the employee is free to quit if this deal stops being worth it to them. Seems fair to do to the employee whatever you'd do to the company if you had more direct access. (I don't expect it to help, but I don't think it's unfair.)
Extreme example, but imagine someone hires mercenaries to raid your village. The mercenaries have no personal animosity towards you, and no authority to alter their assignment. Is it therefore wrong for you to kill the mercenaries? I'm inclined to say they signed up for it.
I have trouble understanding what's going on in people's heads when they choose to follow a policy even when it's visibly going to lead to horrific consequences that no one wants. Who would punish them for failing to comply with the policy in such cases? Or do people think of "violating policy" as somehow bad in itself, irrespective of consequences?
On my model, there are a few different reasons:
You might also be interested in Scott Aaronson's essay on blankfaces.
The normal way I'd judge whether somebody had correctly identified a horse "by their own lights" is to look at what predictions they make from that identification. For example, what they expect to see if they view the same object from a different angle or under different lighting conditions, or how they expect the object to react if they offer it a carrot.
It seems like we can just straightforwardly apply the conclusions from Eliezer's stories about blue eggs and red cubes (starting in Disguised Queries and continuing from there).
There is (in this story) a pattern in nature where the traits (blue, egg-shaped, furred, glowing, vanadium) are all correlated with each other, and the traits (red, cube-shaped, smooth, dark, palladium) are all correlated with each other. These patterns help us make predictions of some traits by observing other traits. This is useful, so we invent the words "blegg" and "rube" as a reference to those patterns.
Suppose we take some object that doesn't exactly match these patterns--maybe it's blue and furred, but cube-shaped and dark, and it contains platinum. After answering all these questions about the object, it might feel like there is another remaining question: "But is it a blegg or a rube?" But that question doesn't correspond to any observable in reality. Bleggs and rubes exist in our predictive model, not in the world. Once we've nailed down every trait we might have used the blegg/rube distinction to predict, there is no additional value in also classifying it as a "blegg" or "rube".
Similarly, it seems to me the difference between the concepts of "horse" and "either horse or a cow-at-night" lies in what predictions we would make about the object based on either of those concepts. The concept itself is an arbitrary label and can't be "right" or "wrong", but the predictions we make based on that concept can be right or wrong.
So I want to say that activating a horse neuron in response to a cow-at-night is "mistaken" IFF that neuron activation causes the observer to make bad predictions, e.g. about what they'll see if they point a flashlight at the object. If their prediction is something like "50% chance of brown fur, 50% chance of white-and-black spots" then maybe "either horse or cow-at-night" is just an accurate description of what that neuron means. But if they confidently predict they'll see a horse when the light is turned on, and then they actually see a cow, then there's an objective physical sense in which we can say they were wrong.
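As a toy illustration of that criterion (my own sketch, not something from the post; the trait names and probabilities are made up): treat a concept as nothing but the predictions it licenses about unobserved traits, and score it by how surprised it is when the trait is finally observed.

```python
import math

# Toy sketch: each "concept" is just a table of predictions about
# what we'll see when the flashlight comes on. Numbers are invented
# purely for illustration.
concepts = {
    "horse":        {"brown_fur": 0.95, "spotted": 0.05},
    "horse_or_cow": {"brown_fur": 0.50, "spotted": 0.50},
}

def surprise(concept: str, observed_trait: str) -> float:
    """Log loss of the concept's prediction once the trait is observed."""
    return -math.log(concepts[concept][observed_trait])

# The object turns out to be a spotted cow:
for name in concepts:
    print(f"{name}: {surprise(name, 'spotted'):.2f}")
# The confident "horse" activation is heavily penalized; the hedged
# "horse_or_cow" activation is not. That asymmetry is the objective
# sense in which the first activation was "mistaken".
```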
(And I basically don't buy the telos explanation from the post. More precisely, I think "this object has been optimized for property X by optimization process Y" is a valid and interesting thing you can say about an object, but I don't think it captures what we intuitively mean when we say that a perception is mistaken. I want to be able to say perceptions are right or wrong even when they're about non-optimized objects that have no particular importance to the observer's evolutionary fitness, e.g. distinguishing stars and comets. I also have an intuition that if you somehow encountered a horse-like object that wasn't causally descended from the evolution of horses, it should still be conceptually valid to recognize it as a horse, but I'm less sure about that part. I also have an intuition that telos should be understood as a relationship between the object and its optimizer, rather than an inherent property of the object itself, and so it doesn't have the correct type-signature to even potentially be the thing we're trying to get at.)
Though I do somewhat wish there was a section here that reviews the plot, for those of us who are curious about what happens in the book without reading 1M+ words.
I think I could take a stab at a summary.
This is going to elide most of the actual events of the story to focus on the "main conflict" that gets resolved at the end of the story. (I may try to make a more narrative-focused outline later if there's interest, but this is already quite a long comment.)
As I see it, the main conflict (the exact nature of which doesn't become clear until quite late) is mainly driven by two threads that develop gradually throughout the story... (major spoilers)
The first thread is Keltham's gradual realization that the world of Golarion is pretty terrible for mortals, and is being kept that way by the power dynamics of the gods.
The key to understanding these dynamics is that certain gods (and coalitions of gods) have the capability to destroy the world. However, the gods all know (Eliezer's take on) decision theory, so you can't extort them by threatening to destroy the world. They'll only compromise with you if you would honestly prefer destroying the world to the status quo, if those were your only two options. (And they have ways of checking.) So the current state of things is a compromise to ensure that everyone who could destroy the world, prefers not to.
Keltham would honestly prefer destroying Golarion (primarily because a substantial fraction of mortals currently go to hell and get tortured for eternity), so he realizes that if he can seize the ability to destroy the world, then the gods will negotiate with him to find a mutually-acceptable alternative.
Keltham speculates (though it's only speculation) that he may have been sent to Golarion by some powerful but distant entity from the larger multiverse, as the least-expensive way of stopping something that entity objects to.
The second thread is that Nethys (god of knowledge, magic, and destruction) has the ability to see alternate versions of Golarion and to communicate with alternate versions of himself, and he's seen several versions of this story play out already, so he knows what Keltham is up to. Nethys wants Keltham to succeed, because the new equilibrium that Keltham negotiates is better (from Nethys' perspective) than the status quo.
However, it is absolutely imperative that Nethys does not cause Keltham to succeed, because Nethys does not prefer destroying the world to the status quo. If Keltham only succeeds because of Nethys' interventions, the gods will treat Keltham as Nethys' pawn, and treat Keltham's demands as a threat from Nethys, and will refuse to negotiate.
Nethys can therefore only intervene in ways that all of the major gods will approve of (in retrospect). So he runs around minimizing collateral damage, nudges Keltham towards being a little friendlier in the final negotiations, and very carefully never removes any obstacle from Keltham's path until Keltham has proven that he can overcome it on his own.
Nethys considers it likely that this whole situation was intentionally designed as some sort of game by some unknown entity. (Partly because Keltham makes several successful predictions based on dath ilani game tropes.)
At the end of the story, Keltham uses an artifact called the Starstone to turn himself into a minor god, then uses his advanced knowledge of physics (unknown to anyone else in the setting, including the gods) to create weapons capable of destroying the world, announces that that's his BATNA, and successfully negotiates with the rest of the gods to shut down hell, stop stifling mortal technological development, and make a few inexpensive changes to improve overall mortal quality-of-life. Keltham then puts himself into long-term stasis to see if the future of this world will seem less alienating to him than the present.
Maybe, but I don't feel like it's a coincidence that we find ourselves in such a world.
Consider that the key limited resource for kings is population (for followers), but increasing population will also tend to increase the number of people who try to be kings. Additionally, technology tends to increase the number of followers that one king could plausibly control, and so reduces the number of kings we need.
Contrariwise, increasing population and technology both tend to increase the number of available wizard specializations, the maximum amount a given wizard can plausibly learn within any given specialty, and the production efficiency of most resources that could plausibly be a bottleneck for wizardry.
(Though I feel I should also confess that I'm reasoning this out as I go; I hadn't thought in those terms before I made the root comment.)