Greetings, and thank you for your attention. I am an independent researcher and the creator of Conversational Game Theory (CGT), a novel computational, cognitive, and psychological consensus-building game for human-to-human, AI-to-human, and AI-to-AI collaboration. Its notable feature is its ability to reach consensus without voting, purely through conversation, something not previously thought possible. Conflict resolution is built into the system's mechanism design, and the only possible outcome is a "win-win" of some kind.
This year we engineered and piloted our computational system. Recently we trained AI agents on different perspectives to play CGT, build consensus, and publish a collaborative article, and we demonstrated that GPT-based agents trained with CGT achieve substantially higher benchmark scores than the same models without that training.
This gives us the capability to create a large Global Library of Consensus Articles: train AI agents on any perspective in any conflict in the world, publish consensus-resolution articles addressing those conflicts, filter out bad-faith actors and disinformation/misinformation, and allow humans to join in at any time.
This global library would serve as a training ground for LLMs. CGT brings remarkable alignment properties, because collaboration is deeply embedded in its mechanism design.
We were not expecting such a profound capability to emerge so quickly, so we are readjusting our focus and seeking advice, advisors, and perhaps even co-founders. We want to continue rolling out research and testing on this capability, and any advice is deeply appreciated.