SquirrelInHell comments on Risk Contracts: A Crackpot Idea to Save the World - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (35)
Of course, my point is to build all intelligent systems so that the probability of them handing themselves a new budget stays within our risk budget (which we choose arbitrarily).
I hope that the survival of humanity dominates the utility functions of the people who build AI, and that they will do their best to carry it over to the AI. You as an individual can have a different utility function, if it serves you well in your life (as long as you don't build any AIs). But that was the wrong way to answer your previous point:
Not in the case of multiple agents who cannot easily coordinate. E.g., what if each human's utility function makes it look reasonable to accept a 1/1000 risk of destroying the world in exchange for potentially huge personal gains?
I am well aware of this, but the effect is negligible when we are dealing with small probabilities.
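A minimal sketch of the arithmetic under discussion (my own illustration, not part of the original exchange): with n agents each independently accepting a per-agent risk p, the exact aggregate risk is 1 - (1 - p)^n, which for small total risk is close to the simple sum n·p. The 1/1000 figure is the one from the comment above; independence between agents is an assumption made purely for illustration.

```python
# Aggregate risk of at least one catastrophe when n agents each independently
# accept a per-agent risk p. Compares the exact value with the additive
# approximation n * p. Independence and the specific numbers are assumptions
# for illustration only.

def exact_aggregate(p: float, n: int) -> float:
    """P(at least one event) among n independent trials with probability p each."""
    return 1.0 - (1.0 - p) ** n

p = 1.0 / 1000  # per-agent risk from the example in the thread

for n in (1, 10, 100, 1000):
    exact = exact_aggregate(p, n)
    additive = n * p
    print(f"n = {n:5d}: exact = {exact:.4f}, additive approximation = {additive:.4f}")

# Output (approximately):
#   n =     1: exact = 0.0010, additive approximation = 0.0010
#   n =    10: exact = 0.0100, additive approximation = 0.0100
#   n =   100: exact = 0.0952, additive approximation = 0.1000
#   n =  1000: exact = 0.6323, additive approximation = 1.0000
# For a small total risk the two nearly coincide; once n * p approaches 1,
# the simple sum overstates the true aggregate risk.
```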