jacob_cannell comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

33 Post author: lukeprog 29 January 2011 02:52AM


Comment author: jacob_cannell 29 January 2011 11:04:45PM -2 points

there's no a priori reason to care about what other "agents" present in your causal past (light cone!) "cared" about

Nor is there an a priori reason for an AI to exist, for it to understand what 'paperclips' are, let alone for it to self-improve through learning the way a human child does, to absorb human languages, and to upgrade itself to the extent necessary to take over the world.

I suspect that any team of scientists or engineers with the knowledge and capability required to build an AGI possessing at least human-infant-level cognitive capacity and the ability to learn human language will understand that making the AI's goal system dynamic is not only advantageous but necessitated in practice by the very cognitive capabilities that understanding human language requires.

The idea of a paperclip maximizer taking over the world is a mostly harmless absurdity, but one that detracts from serious discussion.