JoshuaFox comments on Superintelligence 9: The orthogonality of intelligence and goals - Less Wrong

Post author: KatjaGrace 11 November 2014 02:00AM




Comment author: JoshuaFox 11 November 2014 10:29:08AM 1 point

Evolved agents would be roughly on a par with the other agents around them, so their game-theoretic situation would differ from that of an artificial agent. An artificial agent could have a design very different from every other agent's, and could also far surpass all other agents; neither is possible in evolution.

In fact, because evolved agents within a given ecosystem are so similar to one another, these game-theoretic considerations include not only ordinary reciprocity or reciprocal altruism, but also the sort of acausal reciprocal morality explored by Drescher and MIRI: "you are like me, so my niceness is correlated with yours, so I'd better act nicely."
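
A minimal sketch of that "you are like me" logic, using standard one-shot Prisoner's Dilemma payoffs (the numbers and the correlation parameter are illustrative assumptions, not anything from the comment): when the other agent runs a decision procedure strongly correlated with mine, the only outcomes I can reach lie near the diagonal of the payoff matrix, and cooperating becomes the better choice.

```python
# Hypothetical PD payoffs for (my_move, their_move): C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect (sucker's payoff)
    ("D", "C"): 5,  # I defect, they cooperate (temptation)
    ("D", "D"): 1,  # mutual defection
}

def expected_payoff(my_move: str, correlation: float) -> float:
    """My expected payoff if the other agent copies my move with
    probability `correlation` (1.0 = a perfect decision-procedure twin,
    0.0 = an uncorrelated agent who plays the opposite move)."""
    opposite = "D" if my_move == "C" else "C"
    matched = PAYOFF[(my_move, my_move)]
    mismatched = PAYOFF[(my_move, opposite)]
    return correlation * matched + (1 - correlation) * mismatched

for corr in (0.0, 0.5, 0.8, 1.0):
    c = expected_payoff("C", corr)
    d = expected_payoff("D", corr)
    best = "cooperate" if c > d else "defect"
    print(f"correlation={corr}: cooperate={c:.1f}, defect={d:.1f} -> {best}")
```

With these particular payoffs the switch happens above a correlation of 5/7: defection wins at 0.0 and 0.5, but cooperation wins at 0.8 and 1.0. That threshold is an artifact of the assumed numbers; the point is only that sufficiently similar agents, like co-evolved ones, face a different game than a uniquely designed agent does.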