jacob_cannell comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

33 Post author: lukeprog 29 January 2011 02:52AM




Comment author: Vladimir_Nesov 29 January 2011 10:57:01AM *  14 points

Suppose you're a seed AI and you're somewhat confused about what your creator meant when they said "valueOfCreatingPaperclips=infinity", so you do what you think they were trying to get you to do, which was to create economic value by making paperclips; and the reason they wanted to do that was to make a profit for themselves; and the reason for that is that they're part of this larger system called humanity, which is following this strange vector in preferencespace...

And the reason you value friendship is that "evolution" "made" it so, following the Big Bang. Informal descriptions of physical causes and effects don't translate into moral arguments, and there's no a priori reason to care about what other "agents" present in your causal past (light cone!) "cared" about, any more than there is to care about what they "hated", or even to consider such a concept.

(I become more and more convinced that you do have a serious problem with the virtue of narrowness, better stop the meta-contrarian nonsense and work on that.)

Comment author: jacob_cannell 29 January 2011 11:04:45PM -2 points

there's no a priori reason to care about what other "agents" present in your causal past (light cone!) "cared" about

Nor is there an a priori reason for an AI to exist, for it to understand what 'paperclips' are, let alone for it to self-improve through learning like a human child does, absorb human languages, and upgrade itself to the extent necessary to take over the world.

I suspect that any team of scientists or engineers with the knowledge and capability required to build an AGI with at least human-infant-level cognitive capacity and the ability to learn human language will understand that making the AI's goal system dynamic is not only advantageous but necessitated in practice by the cognitive capabilities required for understanding human language.

The idea of a paperclip maximizer taking over the world is a mostly harmless absurdity, but one that also detracts from serious discussion.