Michaelos comments on Wanted: backup plans for "seed AI turns out to be easy" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think an Arbitration/Negotiation AI would be an interesting seed. The AI is given the ability to listen to disagreements and determine whether two people disagree on some matter. It then uses their contact information and attempts to get them to come to an agreement by presenting various arguments. If it gets those two people to both agree that they agree, it gains utility. It gains a substantially larger utility boost the first time it solves a type of argument than it does from subsequent solves. Perhaps a function like "Each subsequent solve of an argument offers half as much utility as the previous solve." The AI is also explicitly prevented from suggesting violence, to block "You two should fight to the death, and both the winner and the loser will agree that the winner is right, so I have solved the argument" scenarios. Any hint of the AI suggesting violence triggers an immediate externally imposed shutdown. Even if the situation really could be solved by violence in this particular case, shut down anyway.
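As a rough sketch of the proposed reward schedule (the class and method names here are my own invention, purely for illustration): solving a new type of argument yields full utility, and each repeat solve of that type yields half as much as the one before.

```python
from collections import defaultdict

class ArbitrationReward:
    """Illustrative utility schedule: reward halves per repeat argument type."""

    def __init__(self, base_utility=1.0):
        self.base_utility = base_utility
        self.solve_counts = defaultdict(int)  # argument type -> times solved

    def reward(self, argument_type):
        """Return utility for solving this argument type, then record the solve."""
        n = self.solve_counts[argument_type]   # prior solves of this type
        self.solve_counts[argument_type] += 1
        return self.base_utility * (0.5 ** n)  # 1, 0.5, 0.25, ...

r = ArbitrationReward()
print(r.reward("property-line"))  # 1.0  (first solve of this type)
print(r.reward("property-line"))  # 0.5
print(r.reward("noise"))          # 1.0  (new type, full utility again)
```

The geometric decay keeps the AI motivated to seek out genuinely new kinds of disputes rather than farm utility by re-solving the same easy argument repeatedly.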
This appears to have a few good points:
Before you even get to that point, though, it would probably be easier to build a Seed Seed Program: one capable of recognizing that a disagreement is taking place in a formal, text-only setting where both people are rational and the disagreement is known to be solvable (one person is pretending to hold certain misconceptions that he will willingly acknowledge if the program asks), and the program simply ends once agreement is reached.