Perplexed comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

Post author: lukeprog 29 January 2011 02:52AM

Comment author: Perplexed 30 January 2011 05:04:21PM

Why would an AI which optimises for one thing create another AI that optimises for something else?

It wouldn't if it considered itself to be the only agent in the universe. But if it recognizes the existence of other agents, and the impact of their decisions on its own utility, then there are many possibilities (a toy payoff sketch follows the list):

  • The new AI could be created as a joint venture of two existing agents.
  • The new AI could be built because the builder was compensated for doing so.
  • The new AI could be built because the builder was threatened into doing so.
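
To make the joint-venture case concrete, here is a toy payoff calculation. Everything in it is an illustrative assumption (the agents, scenarios, and all numbers are hypothetical, not from the comment): two agents who each value only their own share can both do better by building an AI whose utility function matches neither of theirs.

```python
# Toy payoff model for the joint-venture case. All names and numbers are
# illustrative assumptions. Agents A and B each value only their own share
# of a resource. A unilaterally built A-maximiser provokes costly
# resistance from B; a jointly built AI optimises a 50/50 compromise
# utility function that matches *neither* builder's own.

RESOURCE = 10.0         # total value at stake (assumed)
CONFLICT_COST = 6.0     # value destroyed if B resists a unilateral grab (assumed)
COOPERATION_GAIN = 4.0  # extra value a jointly run AI creates (assumed)

def payoffs(scenario: str) -> tuple[float, float]:
    """Return (utility_A, utility_B) for each scenario."""
    if scenario == "no new AI":
        return (RESOURCE / 2, RESOURCE / 2)     # status quo split
    if scenario == "A builds an A-maximiser":
        return (RESOURCE - CONFLICT_COST, 0.0)  # grab minus conflict losses
    if scenario == "joint AI with compromise utility":
        total = RESOURCE + COOPERATION_GAIN
        return (total / 2, total / 2)           # AI maximises 0.5*U_A + 0.5*U_B
    raise ValueError(scenario)

for s in ("no new AI", "A builds an A-maximiser",
          "joint AI with compromise utility"):
    print(s, payoffs(s))
# A's best outcome (7.0) comes from the AI whose utility function is not A's own.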

Building an AI with a different utility function is not going to satisfy the first AI's utility function!

This may seem intuitively obvious, but it is often false in a multi-agent environment: once compensation or threats are on the table, building an AI with a different utility function can be precisely the action that best satisfies the builder's own utility function, as the sketch below illustrates.
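
A minimal sketch of the compensation case. The paperclip/staple framing and all payoffs are assumptions chosen for illustration, not anything from the comment:

```python
# A minimal side-payment sketch (all names and numbers assumed). The
# builder's utility function counts only paperclips; a client offers
# paperclips as compensation for building a staple-maximising AI.
# Building an AI with a *different* utility function is then exactly
# what best satisfies the builder's own utility function.

PAYMENT = 100.0     # paperclips offered as compensation (assumed)
BUILD_COST = 10.0   # paperclips' worth of effort to build (assumed)

def builder_utility(action: str) -> float:
    """Paperclip-denominated utility of each available action."""
    if action == "build staple-maximiser":
        return PAYMENT - BUILD_COST   # compensated for the work
    if action == "refuse":
        return 0.0                    # no deal, status quo
    raise ValueError(action)

best = max(("build staple-maximiser", "refuse"), key=builder_utility)
print(best, builder_utility(best))   # -> build staple-maximiser 90.0
```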