All of Jonas Metzger's Comments + Replies

Just to clarify, the complete equilibrium strategy alluded to here is:

"Play 99 and, if anyone deviates from any part of the strategy, play 100 to punish them until they give in"

Importantly, this includes deviations from the punishment itself. If you don't join the punishment, you'll get punished yourself. That makes it rational both to play 99 and to punish deviators.
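The behavioral rule above can be sketched in a few lines of Python. This abstracts away the underlying game's payoffs entirely and only encodes the strategy's logic; the function name and the one-round punishment phase are simplifications of my own, not part of the original formulation (in the full strategy, punishment continues "until they give in").

```python
COOPERATE = 99
PUNISH = 100

def prescribed_action(last_round, was_punishing):
    """Return this player's next move under the trigger strategy.

    last_round:    list of all players' moves in the previous round
                   (None on the first round).
    was_punishing: whether punishment was the prescribed action last round.
    """
    if last_round is None:
        return COOPERATE  # start by cooperating
    expected = PUNISH if was_punishing else COOPERATE
    # Any deviation from the prescribed action, including refusing to
    # punish, triggers (or extends) the punishment phase.
    if any(move != expected for move in last_round):
        return PUNISH
    return COOPERATE
```

Note that `prescribed_action([100, 99], True)` returns `100`: a player who declined to join the punishment is itself treated as a deviator, which is exactly what makes the equilibrium self-enforcing.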

The point of the Folk Theorems is that the Nash equilibrium notion has limited predictive power in repeated games like this, because essentially any payoff can be implemented as a similar Nash equilibrium. That do... (read more)

Yeah, I already edited out some verbosity. ChatGPT is just trained to hedge too much currently. Should I take out more?

It seems to have distracted a bit from the purpose of the post: that we can define an unobjectionable way to aggregate utilities and have an LLM follow it, while still being useful for its owner.

[anonymous]
I think verbosity is learned through corpus curation. They've been using a casual conversational tone to train chat models for a while now; even the early AI chatbot prototypes around 15 years ago used the same kind of conversational verbosity. This is just how they want to model the AI for chat, mostly a user-friendly HCI thing.

About 5 years ago there was a news-article summarization model, which I think GPT-3 (and ChatGPT) builds on, that reduced long texts to entities and compressed sentences toward their first-order-logic content using NLP indicators of keywords and parts of speech. Maybe it wasn't an LLM; I didn't look into the code itself.

I think AGI in general can't take into account context that isn't parametrized: the same text can mean different things in different contexts (different entities involved, different time periods and settings, characteristics of the entities themselves, etc.). That's what separates these machines from biological intelligence. If you can model everything a biological organism experiences as parameters you can feed into an AI, then you can achieve AGI; the more of that data you are missing, the further you are from AGI.