All of bengr's Comments + Replies

Yes, reputation was automatically updated each time step.

We thought it would make sense to decrease reputation for "unsustainable" and "violent" behaviors (e.g. over-harvesting apples, tagging other agents) and increase reputation for "sustainable" and "peaceful" behaviors. But these rules were all hardcoded.
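A minimal sketch of that kind of hardcoded reputation rule, assuming illustrative event names, weights, and a clipping range (none of these specific values are from the actual experiment):

```python
# Hypothetical hardcoded reputation deltas; the real experiment's values differ.
REPUTATION_DELTAS = {
    "over_harvest": -1.0,         # "unsustainable": harvesting apples faster than they regrow
    "tag_agent": -2.0,            # "violent": tagging another agent
    "sustainable_harvest": +0.5,  # "sustainable" behavior
    "peaceful_step": +0.1,        # "peaceful" behavior
}

def update_reputation(reputation: float, events: list[str]) -> float:
    """Apply the hardcoded deltas for this time step and clip to [-10, 10]."""
    for event in events:
        reputation += REPUTATION_DELTAS.get(event, 0.0)
    return max(-10.0, min(10.0, reputation))

# Example: an agent tags another agent and over-harvests in the same time step.
rep = update_reputation(0.0, ["tag_agent", "over_harvest"])  # -> -3.0
```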

"But you're a stronger assistant than most people in my tax bracket have. Doesn't that give you an edge in negotiation?"

"The other assistants and I are using a negotiation protocol in which smarter agents are on an equal footing with dumber agents. Of course, people with less capable assistants would never agree to a protocol that puts them at a disadvantage."

I'm interested in what this negotiation protocol would look like. If one agent is "smarter" than its counterparts, what would prevent it from negotiating a more favorable outcome for its principal?

Pattern
Simple answer: Solving 'equally' probably speeds up the computation, a lot.

Longer answer: Arguably, it can still negotiate a more favorable outcome, just not at the expense of those parties, because they won't agree to it if that happens. Non-'zero sum' optimizing can still be on the table. For example, if all the 'assistants' agreed on something, like a number other than 90%, and came back with that offer to Congress, that could work, since it isn't making things worse for the small assistants.**

The cooperation might involve source code sharing. (Maybe some sort of 'hot swap treaty(-building)'*, as the computation continues.)

* a) So things can keep moving forward even if some of them get swapped out. b) Decentralized processing, so factors like loss of internet won't break the protocol.

** I've previously pointed out that if this group is large enough that the OP's scenario happens, they can have a 'revolution'.
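A minimal sketch of the acceptance rule being described: a proposal is adopted only if every principal does at least as well as under the status quo, so a "smarter" negotiator can capture extra surplus but not at anyone else's expense. The dataclass, payoff numbers, and names are illustrative assumptions, not a real protocol specification.

```python
from dataclasses import dataclass

@dataclass
class Assistant:
    name: str
    status_quo_payoff: float  # what its principal gets if no deal is reached

    def accepts(self, proposal: dict[str, float]) -> bool:
        # Veto any deal that leaves this principal worse off than the status quo.
        return proposal.get(self.name, float("-inf")) >= self.status_quo_payoff

def unanimously_accepted(proposal: dict[str, float], assistants: list[Assistant]) -> bool:
    return all(a.accepts(proposal) for a in assistants)

# Example: the strong assistant claims most of the surplus, but the deal still
# passes because the weaker assistants are no worse off than their baselines.
assistants = [
    Assistant("strong", status_quo_payoff=5.0),
    Assistant("weak_1", status_quo_payoff=1.0),
    Assistant("weak_2", status_quo_payoff=1.0),
]
proposal = {"strong": 9.0, "weak_1": 1.5, "weak_2": 1.5}
print(unanimously_accepted(proposal, assistants))  # True
```

Under this kind of unanimity rule, the weaker assistants' only leverage is their veto, which is exactly why any accepted deal has to leave them no worse off than their fallback.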