Carl_Shulman comments on Invisible Frameworks - Less Wrong
"And again, I bet Roko didn't even consider "destroy all other agents" as a candidate UIV because of anthropomorphic optimism." I had to point it out, but I think he may endorse it.
From Roko's blog:
Roko said...
Me: If the world is like this, then a very large collection of agents will end up agreeing on what the "right" thing to do is.
Carl: No, because the different agents will have different terminal aims. If Agent X wants to maximize the amount of suffering over pleasure, while Agent Y wants to maximize the amount of pleasure over pain, then X wants agents with X-type terminal values to acquire the capabilities Omohundro discusses while Agent Y wants Y-type agents to do the same. They will prefer that the total capabilities of all agents be less if this better leads to the achievement of their particular ends.
Roko: Ah, it seems that I have introduced an ambiguity into my writing. What I meant was:
If the world is like this, then, for a very large set of agents, each considered in isolation, the notion of the "right" thing to do will end up being the same.