An expected paperclip maximizer is an agent that outputs whichever action it believes will lead to the greatest number of paperclips existing. In more detail, its utility function is linear in paperclip-seconds: the number of paperclips that exist, multiplied by the number of seconds each paperclip lasts, summed over the lifetime of the universe. See http://wiki.lesswrong.com/wiki/Paperclip_maximizer.
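In symbols, one way to write this utility function (a sketch; the source gives only the verbal description, and the notation $N(t)$ and $T$ is introduced here for illustration) is:

$$U = \int_0^T N(t)\, dt$$

where $N(t)$ is the number of paperclips in existence at time $t$ and $T$ is the lifetime of the universe. This integral equals the sum, over all paperclips, of the number of seconds each one lasts.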
The agent may be a bounded maximizer rather than an ideal, unbounded one without changing the key ideas. The core premise is just this: given two actions A and B whose consequences it has evaluated, the paperclip maximizer always prefers whichever action it expects to lead to more paperclips.
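As a concrete illustration of that premise, here is a minimal Python sketch of the decision rule. Everything in it is an illustrative assumption rather than anything specified above: `world_model` stands in for the agent's (possibly bounded) beliefs, mapping an action to (probability, paperclip-seconds) pairs over the outcomes it has evaluated, and the toy numbers are made up.

```python
def expected_paperclips(action, world_model):
    """Expected paperclip-seconds the agent believes `action` yields,
    under whatever outcomes its (possibly bounded) model has evaluated."""
    return sum(p * clips for p, clips in world_model(action))

def choose_action(actions, world_model):
    """Core premise: among evaluated actions, always pick the one
    expected to lead to more paperclips."""
    return max(actions, key=lambda a: expected_paperclips(a, world_model))

# Toy world model (illustrative numbers only).
def toy_world_model(action):
    return {
        "A": [(0.5, 100.0), (0.5, 0.0)],  # expected value: 50
        "B": [(1.0, 60.0)],               # expected value: 60
    }[action]

print(choose_action(["A", "B"], toy_world_model))  # -> "B"
```

The only load-bearing step in the sketch is the `max`: however crude or sophisticated the world model, the agent's preference between evaluated actions is determined solely by expected paperclips.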
Some key ideas that the notion of an expected paperclip maximizer illustrates: