you'll pay more for a contract on coin B. You'll do that because other people might figure out if it's an always-heads coin or an always-tails coin. If it's always heads, great, they'll bid up the market, it will activate, and you'll make money. If it's always tails, they'll bid down the market, and you'll get your money back.
So8res seems to be arguing that this reasoning only holds if your own purchase decision can't affect the market (say, if you're making a private bet on the side and both you and your counter-party are sworn to Bayesian secrecy). If your own bet could possibly change which contract activates, then you need to worry that contract B activates because you bid more than your true belief on it, in which case you lose money in expectation.
(Easy proof: Assume all market participants have precisely the same knowledge as you, and all follow your logic; what happens?)
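Here's a toy simulation of both regimes, in case it helps. The specific numbers (a 50/50 prior on B's type, a $1 payout, a bid of 0.95) are parameters I made up for illustration, and the activation rule (B activates exactly when informed traders discover it's the always-heads coin) is a deliberate simplification, not anything from dynomight's post.

```python
import random

def avg_profit(n_trials=100_000, informed=True, my_bid=0.95):
    """Toy model: coin B is secretly always-heads or always-tails
    (50/50 prior). The contract pays $1 if B activates and comes up
    heads, and is refunded at cost if B never activates."""
    total = 0.0
    for _ in range(n_trials):
        always_heads = random.random() < 0.5
        if informed:
            # Other traders figure out B's type and move the price,
            # so B activates exactly when it's the always-heads coin;
            # my own bet is too small to change which contract wins.
            activates = always_heads
        else:
            # Everyone has my knowledge and follows my logic: we all
            # bid above the true 50% value, so B activates regardless.
            activates = True
        if activates:
            total += (1.0 if always_heads else 0.0) - my_bid
        # If B never activates, the bid is refunded: profit of 0.
    return total / n_trials

print(avg_profit(informed=True))   # ~ +0.025: overbidding is free money
print(avg_profit(informed=False))  # ~ -0.450: overbidding is a pure loss
```

The second number is the "easy proof" in action: once the overbidding logic is universal, B activates even when it's the always-tails coin, and the refund never saves you.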
I think dynomight's reasoning doesn't quite hold even when your own bet is causally isolated, because:
You can sort of eliminate assumption #2 if you rework the example so that your true beliefs about A and B are essentially tied, but if they're essentially tied then it doesn't pragmatically matter if we get the order wrong. Assumption #2 places a quantitative bound on how wrong your beliefs can be, based on how plausible it is that the market outperforms your own judgment.
Agreed. As a long-time reader of Schneier's blog, I was quite surprised by his endorsement, and I would have cited exactly those two essays. He's written a number of times about bad things that humans might intentionally use AI to do, such as AI propaganda, AI-powered legal hacks, and AI spam clogging requests for public comment, but I would have described him as scornful of concerns about x-risk or alignment.
So maybe the idea is that we will tend to find ourselves in a world where kings are at, or past, the equilibrium level, but the market for wizards will still support expansion and growth.
Maybe, but I don't feel like it's a coincidence that we find ourselves in such a world.
Consider that the key limited resource for kings is population (as a source of followers), but increasing population will also tend to increase the number of people who try to be kings. Additionally, technology tends to increase the number of followers that one king could plausibly control, and so reduces the number of kings we need.
Contrariwise, increasing population and technology both tend to increase the number of available wizard specializations, the maximum amount a given wizard can plausibly learn within any given specialty, and the production efficiency of most resources that could plausibly be a bottleneck for wizardry.
(Though I feel I should also confess that I'm reasoning this out as I go; I hadn't thought in those terms before I made the root comment.)
There's a technical sense in which writing a piece of computer software consumes electricity and calories, and so it's not "from nothing", but I think that framing does more to obscure than to illuminate the difference that I'm pointing to.
If the total value of everything in the wizard's workshop is higher when they finish than it was when they started, then I think it makes sense to say that the wizard has created value, even if they needed some precursors to get the process started.
I think an important distinction is that wizards create and kings allocate; if you have a bunch of wizards, they can all wield their powers mostly without interfering with each other and their results can accumulate, whereas if you have a bunch of kings then (beyond some small baseline amount) they basically compete for followers and the total power being wielded doesn't increase.
On my model, the strongest individual people around are kings, but adding more kings doesn't typically make civilization stronger, because kings basically move power around instead of creating it. (Though kings can indirectly create power by e.g. building schools, and they can reveal hidden power by taking a power that was previously being squandered or fighting against itself and directing it to some useful end.)
I do think it's pretty unfortunate that the strategies that make civilization stronger are often not great strategies for maximizing personal power. I think a lot of civilizational ills can be traced back to this fact.
Your example is wrong because you are not leaving the A+B case unchanged.
On what basis do you claim that the A+B case should be unchanged? The entire point of the example is that Carol now actually has the power to stop A+B and thus they actually can't do anything without her on board.
If you are intending to make some argument along the lines of "a veto is only a formal power, so we should just ignore it" then the example can trivially be modified so that B's resources are locked in a physical vault with a physical lock that literally can't be opened without C. The fact that B can intentionally surrender some of his capabilities to C is a fact of physical reality and exists whether you like it or not.
I think we already live in a world where, if you are dealing with a small business, and the owner talks to you directly, it's considered acceptable to yell at them if they wrong you. This does occasionally result in people yelling at small business owners for bad reasons, but I think I like it better than the world where you're not allowed to yell at them at all.
The main checks on this are (a) bystanders may judge you if they don't like your reasons, and (b) the business can refuse to do any more business with you. If society decides that it's OK to yell at a company's designated representative when the company wrongs you, I expect those checks to function roughly equally well, though with a bit of degradation for all the normal reasons things degrade whenever you delegate.
(The company will probably ask their low-level employees to take more crap than the owners would be willing to take in their place, but similarly, someone who hires mercenaries will probably ask those mercenaries to take more risk than the employer would take, and the mercenaries should be pricing that in.)
But they need money for food and shelter.
So do the mercenaries.
The mercenaries might have a legitimate grievance against the government, or god, or someone, for putting them in a position where they can't survive without becoming mercenaries. But I don't think they have a legitimate grievance against the village that fights back and kills them, even if the mercenaries literally couldn't survive without becoming mercenaries.
And as far as moral compromises go, choosing to be a cog in an annoying, unfair, but not especially evil machine is a very mild one.
Shouting at them is a very mild response.
You say you don't expect the shouting to do any good, so what makes it appropriate? If we all go around yelling at everyone who represents something that upsets us, but who has a similar degree of culpability to the gate attendant, we're going to cause a lot of unnecessary stress and unhappiness.
If the mercenary band is much stronger than your village and you have no realistic chance of defeating them or saving anyone, I still think it's reasonable and ethical to fight back and kill a few of them, even if it makes some mercenaries worse off and doesn't make any particular person better off.
At a systemic level, this still acts as an indirect incentive for people to behave better. (Hopefully, the risk of death increases the minimum amount of money you need to offer someone to become a mercenary raider, which makes people less inclined to hire mercenary raiders, which leads to fewer mercenary raids. Similarly, shouting at a secretary hopefully indirectly increases the cost of hiring secretaries willing to stand between you and a person you're harming.)
Though I also kinda feel it's a fair and legitimate response even if you can prove in some particular instance that it definitely won't improve systemic incentives.
Bad people react to this by getting angry at the gate attendant; good people walk away stewing with thwarted rage.
Shouting at the attendant seems somewhat appropriate to me. They accepted money to become the company's designated point of interface with you. The company has asked you to deal with the company through that employee, the employee has accepted the arrangement, the employee is being compensated for it, and the employee is free to quit if this deal stops being worth it to them. Seems fair to do to the employee whatever you'd do to the company if you had more direct access. (I don't expect it to help, but I don't think it's unfair.)
Extreme example, but imagine someone hires mercenaries to raid your village. The mercenaries have no personal animosity towards you, and no authority to alter their assignment. Is it therefore wrong for you to kill the mercenaries? I'm inclined to say they signed up for it.
JPEG has a standardized compression scheme. Many images use this scheme, and the same tool decompresses all of them. The tool does not require prior knowledge of what any of the images look like when decompressed.
Unpacking a sazen does not rely on knowledge of a particular compression scheme; it relies on knowledge of the referent. This seems pretty different to me.
Furthermore, the fact that you can't read JPEGs without the right tool doesn't seem to me like it has much to do with the compression. Raw, uncompressed bitmap files are also pretty unreadable to humans without the appropriate tools. And you can achieve lossy compression of a picture without changing its format, e.g. by reducing its resolution.
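To make that last point concrete, here's a minimal sketch using the Pillow library; the filenames are placeholders. One generic decoder opens any JPEG without prior knowledge of the particular image, and halving the resolution throws away information while the file remains an ordinary JPEG.

```python
from PIL import Image  # pip install Pillow

# One generic decoder handles every JPEG: it needs the shared
# compression scheme, not prior knowledge of this particular image.
img = Image.open("photo.jpg")

# Lossy compression without changing the format: halve the
# resolution and save the result as an ordinary, smaller JPEG.
small = img.resize((img.width // 2, img.height // 2))
small.save("photo_small.jpg")
```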