# private_messaging comments on Tool for maximizing paperclips vs a paperclip maximizer - Less Wrong Discussion

3 12 May 2012 07:38AM



Comment author: 12 May 2012 02:06:47PM, 1 point

Hmm, in my view it is more of a goal distinction than an abilities distinction.

The model popular here is that of an 'expected utility maximizer', where the 'utility function' is defined on the real world. Such an agent wants to build the most accurate possible model of the real world in order to maximize that function, and it tries to avoid corruption of the function, etc. It also wants its outputs to affect the world, and if put in a box, it will try to craft outputs that do things in the real world even if you only wanted to look at them.
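The 'expected utility maximizer' model being discussed can be sketched in a few lines. This is an illustrative toy, not anyone's actual proposal; the world model, actions, and utility function below are hypothetical placeholders. The key feature is that utility is defined over world states (outcomes), not over the agent's percepts:

```python
# Minimal sketch of an expected-utility maximizer (illustrative only).
# The agent's utility is defined over world states, not over its sensory inputs.

def expected_utility(action, world_model, utility):
    """Sum utility over possible outcomes, weighted by their probability."""
    return sum(p * utility(outcome)
               for outcome, p in world_model(action).items())

def choose_action(actions, world_model, utility):
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, world_model, utility))

# Toy example: two actions with different outcome distributions.
world_model = lambda a: ({"clip": 0.9, "no_clip": 0.1} if a == "build"
                         else {"clip": 0.1, "no_clip": 0.9})
utility = lambda outcome: 1.0 if outcome == "clip" else 0.0

print(choose_action(["build", "idle"], world_model, utility))  # -> build
```

The hard part the comment points at is not this loop but the `utility` function itself: reducing a preference over the real world to operations on what the agent can actually compute.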

This is all very ontologically basic to humans. We easily philosophize about such stuff.

Meanwhile, we don't know how to do that. We don't know how to reduce that 'utility' over the world to elementary operations performed on the sensory input (neither directly nor on a meta level). The current solution involves one part that creates and updates a mathematically defined problem, and another part that finds mathematical solutions to it; the solutions are then shown if the system is a tool, or applied to the real world if it isn't. The wisdom of applying those solutions to the real world is an entirely separate issue. The point is that the latter works like a tool if boxed, not like a caged animal (or a caged human).
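The two-part split described above — one part poses a mathematically defined problem, another part solves it, and in tool mode the solution is merely displayed rather than applied — might be sketched like this. All names and the toy problem are hypothetical, chosen only to make the separation of stages concrete:

```python
# Sketch of the tool-style architecture: formulate, solve, display.
# Nothing here acts on the world; applying the solution is a separate,
# human-mediated step.

def formulate_problem(observations):
    """Turn raw observations into a well-defined optimization problem."""
    # Hypothetical problem: find x minimizing (x - target)^2,
    # where the target is derived from the observations.
    target = sum(observations) / len(observations)
    return lambda x: (x - target) ** 2

def solve(problem, candidates):
    """Pure mathematical search over candidate solutions."""
    return min(candidates, key=problem)

observations = [2.0, 4.0, 6.0]
problem = formulate_problem(observations)
solution = solve(problem, [x * 0.5 for x in range(20)])

# Tool mode: just show the solution; a human decides whether to apply it.
print("proposed solution:", solution)  # -> proposed solution: 4.0
```

The solver is a pure function of the stated problem; it has no channel to the world other than the displayed answer, which is what the comment means by it working "like a tool if boxed".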

edit: another problem, I think, is that many of the 'difficulty of friendliness' arguments are just special cases of a general 'difficulty of world intentionality'.

Comment author: 13 May 2012 10:50:50AM, 2 points

> The model popular here is that of 'expected utility maximizer', and the 'utility function' is defined on the real world.

I think this is a bit of a misperception stemming from the use of the "paperclip maximizer" example to illustrate points about instrumental reasoning. Certainly folks like Eliezer or Wei Dai or Stuart Armstrong or Paul Christiano have often talked about how a paperclip maximizer is much of the way to FAI (in having a world-model robust enough to support consequentialism). Note that people also like to use the AIXI framework as a model, and use it to point out that AIXI is set up not as a paperclip maximizer but as a wireheader (pornography and birth control rather than sex and offspring), with its utility function defined over sensory inputs rather than over a model of the external world.

For another example, when talking about the idea of creating an AI with some external reward that can be administered by humans but not easily hacked/wireheaded by the AI itself, people use the example of an AI designed to seek factors of certain specified numbers, or a proof or disproof of the Riemann hypothesis according to some internal proof-checking mechanism, etc., recognizing the role of wireheading and the difficulty of specifying goals externally rather than over simple percepts and the like.