I think I've come up with a utility function for AI that seems too obvious to be right. Can people here poke holes in it?
Basically, the AI does the following: it creates a list of possible futures that it could cause. Then, for each future and each person (as that person is at the time of the AI's activation):

1. Simulate convincing that person that the future is going to happen.
2. If the person would try to help the AI,...
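To make the structure of the proposal concrete, here is a minimal Python sketch. Everything in it is a stand-in: the post gives no implementation, `would_help` is an assumed oracle for step 1's "simulate convincing that person", and since the post cuts off at step 2, scoring each future by a head count of would-be helpers is my assumption, not the author's.

```python
from typing import Callable, Iterable

def best_future(
    futures: Iterable[str],
    people: Iterable[str],
    would_help: Callable[[str, str], bool],
) -> str:
    """Pick the candidate future that, according to the simulation
    oracle, the most people would try to help the AI bring about."""
    people = list(people)
    # Step 2 is truncated in the original post; a simple head count
    # of helpers per future is assumed here for illustration.
    return max(futures, key=lambda f: sum(would_help(p, f) for p in people))

if __name__ == "__main__":
    # Toy usage with a dummy oracle that "helps" any future
    # mentioning the person's name.
    oracle = lambda person, future: person in future
    print(best_future(
        ["a future Alice likes", "a future Bob and Carol like"],
        ["Alice", "Bob", "Carol"],
        oracle,
    ))
```

Even at this level of abstraction, the sketch makes the load-bearing pieces visible: enumerating "possible futures", simulating people faithfully, and aggregating their reactions are each doing enormous work.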
Aug 28, 2019