Comments

varsel

This seems to be almost equivalent to irreversibly forming a majority voting bloc. The only difference is how they interact with the (fake) randomization: creating a subagent effectively (and perfectly) correlates all of the future random outputs. (In general, I think this will change the outcomes unless the agents' cardinal preferences about different decisions are independent.)
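A minimal toy illustration of the correlation point (my own setup, not necessarily the mechanism described in the post): two binary decisions, each settled by a fair coin flip between two equal-weight agents, where agent A's cardinal utility is not independent across the decisions (it gets a bonus only for winning both). Correlating the two flips, as a single subagent/bloc effectively does, changes A's expected utility:

```python
import random

BONUS = 1.0  # extra utility A gets only if it wins both decisions

def utility_A(win1, win2):
    # Non-separable cardinal preferences: base utility plus a bonus for winning both.
    base = int(win1) + int(win2)
    return base + (BONUS if (win1 and win2) else 0.0)

def expected_utility(correlated, trials=200_000):
    total = 0.0
    for _ in range(trials):
        flip1 = random.random() < 0.5
        # Independent: each decision gets its own draw.
        # Correlated: one shared draw resolves both decisions.
        flip2 = flip1 if correlated else (random.random() < 0.5)
        total += utility_A(flip1, flip2)
    return total / trials

print("independent draws:", expected_utility(correlated=False))  # ~1.25
print("one shared draw:  ", expected_utility(correlated=True))   # ~1.50
```

With additively separable utilities the two cases would come out the same; the difference only appears because A's preferences over the two decisions interact.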

The randomization trick still potentially helps here: it would be in each representative's interest to agree not to vote for such proposals, prior to knowing which proposals will come up and in which order they're voted on. However, depending on what fraction of its potential value an agent expects to be able to achieve through negotiations, I think some agents would not sign such an agreement if they expect to get the chance to lock their opponents out before being locked out themselves.

Actually, there seems to be a more general issue with ordering and incompatible combinations of choices - splitting that into a different comment.

varsel

(It follows that an artificial intelligence just a tiny bit smarter than Einstein and von Neumann would be as much more productive than them as they are in relation to other mathematician/physicists).

I don't think this necessarily follows. I think it only follows that such an AI would be much more productive than the average member of a population of Einsteins.
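To make that concrete, here is one toy way the inference can fail (entirely my own construction, under the strong assumption that productivity is largely positional, with credit for each problem going to whoever gets there first). A small edge over your own population then buys a large relative advantage, but an AI slightly smarter than Einstein ends up with roughly Einstein's absolute output, not a comparably multiplied one:

```python
import random

def solved_counts(intelligences, n_problems=20_000, noise_sd=3.0):
    """How many problems each researcher solves first (highest noisy score wins)."""
    counts = [0] * len(intelligences)
    for _ in range(n_problems):
        scores = [iq + random.gauss(0.0, noise_sd) for iq in intelligences]
        counts[scores.index(max(scores))] += 1
    return counts

# World 1: nine ordinary physicists plus an Einstein who is only slightly smarter.
world1 = solved_counts([130.0] * 9 + [135.0])
einstein_output = world1[-1]
avg_physicist_output = sum(world1[:-1]) / 9
print("Einstein / average physicist: ", einstein_output / avg_physicist_output)

# World 2: nine Einsteins plus an AI that is only slightly smarter than them.
world2 = solved_counts([135.0] * 9 + [140.0])
ai_output = world2[-1]
avg_einstein_output = sum(world2[:-1]) / 9
print("AI / average Einstein:        ", ai_output / avg_einstein_output)

# The AI dominates its population the way Einstein dominated his, but its
# absolute output is about the same as Einstein's was, not vastly larger.
print("AI output / Einstein's output:", ai_output / einstein_output)
```

Of course, if productivity really is a steep absolute function of intelligence rather than positional, the original inference can go through; the point is only that the premise alone doesn't force it.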

varsel

I don't think that necessarily reveals people's preferences; that would imply that they actually chose that outcome. I think in most cases people are ignorant of what is going to happen, or know it only in an abstract sense. Those who actually know what they're in for tend not to die that way.