TheOtherDave comments on Holden's Objection 1: Friendliness is dangerous - Less Wrong

11 Post author: PhilGoetz 18 May 2012 12:48AM


Comment author: TheOtherDave 24 May 2012 02:54:25PM 0 points

I no longer know what the words "intelligence," "AI", and "AGI" actually refer to in this conversation, and I'm not even certain the referents are consistent, so let me taboo the whole lexical mess and try again.

For any X, if the existence of X interferes with an agent A achieving its goals, then the better A is at optimizing its environment for its goals, the less likely X is to exist.

For any X and A, the more optimizing power X can exert on its environment, the more likely it is that the existence of X interferes with A achieving its goals.

For any X, if A values the existence of X, then the better A is at implementing its values, the more likely X is to exist.

All of this is as true for X=intelligent beings as for X=AI, X=AGI, or X=pie.
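The three claims above can be sketched as a toy probability model. This is purely illustrative and not from the thread: the function name, the `[0, 1]` "power" scale, and the multiplicative functional form are all my own assumptions, chosen only to make the qualitative directions of the claims concrete.

```python
def p_exists(base: float, interferers: list[float], valuers: list[float]) -> float:
    """Toy estimate of the probability that X exists.

    base        -- prior probability that X exists absent any agents
    interferers -- optimizing powers (in [0, 1]) of agents whose goals
                   X interferes with
    valuers     -- optimizing powers (in [0, 1]) of agents that value
                   the existence of X
    """
    p = base
    # Claim 1: the better an interfering agent is at optimizing,
    # the less likely X is to exist.
    for power in interferers:
        p *= (1.0 - power)
    # Claim 3: the better a valuing agent is at implementing its values,
    # the more likely X is to exist.
    for power in valuers:
        p = p + (1.0 - p) * power
    return p

# A stronger interfering agent pushes the probability down.
assert p_exists(0.5, [0.9], []) < p_exists(0.5, [0.1], [])
# A stronger valuing agent pushes it up.
assert p_exists(0.5, [], [0.9]) > p_exists(0.5, [], [0.1])
```

When an agent both values X and is interfered with by X (the case raised below, where A1 values distinct agents A2..An), the two forces pull in opposite directions, which is why the net effect of increasing everyone's power is ambiguous.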

Comment author: DanArmak 24 May 2012 03:16:18PM 0 points

As far as I can see, this is all true and agrees with everything you, I and thomblake have said.

Comment author: TheOtherDave 24 May 2012 03:24:05PM 0 points

Cool.
So it seems to follow that we agree on this: if agent A1 values the existence of distinct agents A2..An, it's unclear how the likelihood of A2..An existing varies with the optimizing power available to A1..An. Yes?

Comment author: DanArmak 24 May 2012 06:02:10PM * 0 points

Yes. Even if we know each agent's optimizing power, and each agent's estimate of every other agent's power and ability to acquire greater power, the behavior of A1 still depends on its exact values (for instance, what else it values besides the existence of the others). It also depends on the values of the other agents (might they choose to initiate conflict among themselves, or against A1?).