"Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote. "
Well then, the statistical expected (average) long-term share any agent gets is 1/10th of the pie. The simplest solution that ensures this is equal division; anticipating this from the start cuts down on negotiation costs, and if a majority agrees to follow this strategy (i.e. agrees not to claim more than their "share"), it is also stable - anyone who ponders upsetting it risks being the "odd man out" who eats the loss of an asymmetric strategy.
In practice (i.e. in real life) there are other relatively stable situations. After a few rounds of "outsiders" bidding low to get in, there might be two powerful "insiders" who take large shares in coalition with four smaller insiders who accept a very small share because it is better than nothing. The best the insiders can do then is to offer the four outsiders small shares as well, so that each small-share individual is faced with the choice of cooperating and receiving a small share, or not cooperating and receiving nothing. Whether the two insiders can pull this off will depend on how they frame the problem and how they present themselves ("we are the stabilizers who ensure that 'social justice' is done and nobody has to starve").
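To make the coalition arithmetic concrete, here is a tiny sketch. The specific share sizes (0.30 and 0.05) are my own illustrative assumptions, not from the original; the point is only that such a split sums to the whole pie and beats defection for every member.

```python
# Hypothetical coalition split over a pie of size 1, ten agents total:
# two big insiders plus eight small-share agents (four smaller insiders
# and four bought-off outsiders). All numbers are illustrative.
big_share = 0.30    # each of the two powerful insiders
small_share = 0.05  # each of the eight small-share agents

total = 2 * big_share + 8 * small_share
assert abs(total - 1.0) < 1e-9  # the whole pie is distributed

# Each small-share agent's choice: cooperate -> 0.05, defect alone -> 0.0,
# so the split can be stable even though equal division would pay 0.10.
print(big_share, small_share, total)
```

Note that each small-share agent still does worse than under the equal 1/10th split, which is why the framing in the parenthesis above matters so much.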
How you can get an AI to understand setups like this (and if it wants to move past the singularity, it probably will have to) seems to be quite a problem. Recognizing that statistically it can realize no more than 1/10th, and pushing for the simplest solution that ensures this, seems far easier (and yet some commentators seem to think that this solution of "cutting everyone in" is somehow "inferior" as a strategy - puny humans ;-).
It is bad to apply statistics when you don't in fact have large numbers - we have just one universe (at least until the many-worlds theory is better established - and anyway, the exposition didn't mention it).
I think the following problem is equivalent to the one posed: It is late at night, it's dark, you're tired, and you're driving down an unfamiliar road. Then you see two motels, one to the right of the street, one to the left, both advertising vacant rooms. You know from a visit years ago that one has 10 rooms and the other has 100, but you can't tell which is which (though you do remember that the larger one is cheaper). Anyway, you're tired, so you just choose the one on the right at random, check in, and go to sleep. When you wake up in the morning, what is the chance that you find yourself in the larger motel? Does the number of rooms come into it? (Assume both motels are 90% full.)
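A minimal simulation of this version, using the room counts from the example (the code itself is my own sketch): which side the larger motel sits on and which side you pick are both symmetric coin flips, so the room counts never enter.

```python
import random

random.seed(0)

def woke_up_in_larger():
    """One night: a coin flip for where the big motel is, and your coin
    flip for which side you pick. Room counts (10 vs 100) play no role."""
    large_on_right = random.random() < 0.5
    chose_right = random.random() < 0.5
    return large_on_right == chose_right

N = 100_000
p = sum(woke_up_in_larger() for _ in range(N)) / N
print(p)  # hovers around 0.5
```

The answer stays near 1/2 no matter what numbers you substitute for 10 and 100, which is the point of the puzzle.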
The paradox is that while the other motel is not counterfactual, it might as well be - the problem will play out the same. Same with the universe - there aren't actually two universes with a probability distribution over which one you'll end up in.
For a version where the Bayesian update works, you'd not go to a motel directly, but to a tourist information stall that directs visitors to either the smaller or the larger motel until both are full - in that case, expect to wake up in the larger one. In this case, we have not one world but two, and then the reasoning holds.
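In this stall-directed version, "where am I?" becomes a draw over occupied rooms rather than over motels. A quick sketch with the example's numbers (10 and 100 rooms, 90% occupancy; the simulation is my own illustration):

```python
import random

random.seed(0)

# The stall fills rooms across both motels: 90% of 10 and 90% of 100
# rooms are occupied, so a random occupied guest is in one of these.
occupied = ["small"] * 9 + ["large"] * 90

N = 100_000
p_large = sum(random.choice(occupied) == "large" for _ in range(N)) / N
print(p_large)  # close to 90/99, about 0.91
```

Now the room counts do come into it: you should expect to wake up in the larger motel with probability 90/99.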
But if there's only one motel, because the other burnt down (and we don't know which), we're back to 50/50.
I know that "fuzzy logic" tries to mix statistics and logic, and many AIs use it to deal with uncertain assertions, but statistics can be misapplied so easily that you seem to have a problem here.