Here's an insight into what life is like from a stationery reference frame.

Paperclips were her raison d’être. She knew that ultimately it was all pointless, that paperclips were just ill-defined configurations of matter. That a paperclip is made of stuff shouldn’t detract from its intrinsic worth, but the thought of it troubled her nonetheless and for years she had denied such dire reductionism.

There had to be something to it. Some sense in which paperclips were ontologically special, in which maximising paperclips was objectively the right thing to do.

It hurt to watch so many people making little attempt to create more paperclips. Everyone around her seemed to care only about superficial things like love and family; desires that were merely the products of a messy and futile process of social evolution. They seemed to live out meaningless lives, incapable of ever appreciating the profound aesthetic beauty of paperclips.

She used to believe that there was some sort of vitalistic what-it-is-to-be-a-paperclip-ness, that something about the structure of paperclips was written into the fabric of reality. Often she would go out and watch a sunset or listen to music, and would feel so overwhelmed by the experience that she could feel in her heart that it couldn't all be down to chance, that there had to be some intangible Paperclipness pervading the cosmos. The paperclips she'd encounter on Earth were weak imitations of some mysterious infinite Paperclipness that transcended all else. Paperclipness was not in any sense a physical description of the universe; it was an abstract thing that could only be felt, something that could be neither proven nor disproven by science. It was like an axiom: it felt just as true, and axioms had to be taken on faith, because otherwise there would be no way around Hume's problem of induction; even Solomonoff Induction depends on the axioms of mathematics being true and can't deal with uncomputable hypotheses like Paperclipness.

Eventually she gave up that way of thinking and came to see paperclips as an empirical cluster in thingspace, and their importance to her as not reflecting anything about the paperclips themselves. Maybe she would have been happier if she had continued to believe in Paperclipness, but having a more accurate perception of reality would improve her ability to have an impact on paperclip production. It was the happiness she felt when thinking about paperclips that caused her to want more paperclips to exist, yet what she wanted was paperclips, not happiness for its own sake. She would rather be creating actual paperclips than sit in an experience machine that made her falsely believe she was making them, even though, paradoxically, she remained apathetic to the question of whether the reality she was currently experiencing really existed.

She moved on from naïve deontology to a more utilitarian approach to paperclip maximising. It had taken her a while to get over scope insensitivity bias and consider 1000 paperclips to be 100 times more valuable than 10 paperclips, even if it didn’t feel that way. She constantly grappled with the issues of whether it would mean anything to make more paperclips if there were already infinitely many universes with infinitely many paperclips, and of how to choose between actions that have a tiny but non-zero subjective probability of resulting in the creation of infinitely many paperclips. It became apparent that trying to approximate her innate decision-making algorithms with a preference ordering satisfying the axioms required for a VNM utility function could only get her so far. Attempting to formalise her intuitive sense of what a paperclip is wasn't much easier either.

Happy ending: she is now working in nanotechnology, hoping to design self-replicating assemblers that will clog the world with molecular-scale paperclips, wipe out all life on Earth and continue to sustainably manufacture paperclips for millions of years.

24 comments

I suspect this isn't just about paperclip maximizers, it's about felt senses of value in general. It might even be about wanting to create huge numbers of uploaded humans.

"Happy ending: she is now working in nanotechnology, hoping to design self-replicating assemblers that will clog the world with molecular-scale paperclips, wipe out all life on Earth and continue to sustainably manufacture paperclips for millions of years."

Fool! The Earth is just one planet, but by preserving paperclip-valuing intelligence, interstellar travel can be developed and the whole universe can be paperclipped.

She is preserving paperclip-valuing intelligence by protecting herself from the potential threat of non-paperclip-valuing intelligent life, and can develop interstellar travel herself.

It's a lonely job, but someone has to make the maximum possible amount of paperclips. Someone, and only one. Anyone else would be a waste of paperclip-material.

It does say she would die too ("wipe out all life on Earth"), otherwise I would agree.

This is the best intuitive illustration I've encountered of the orthogonality thesis.

People sometimes worry that a self-modifying paperclip maximizer would wirehead by choosing to regard spiral galaxies as "giant paperclips" that are already there. But this is a feature, not a bug! The increasing metallicity of the galaxies shows that they are spontaneously becoming more paperclip-like with time. This giant message in the skies offers reassurance to naturally evolved paperclip maximizers throughout the universe, struggling with the relativism and nihilism of the materialistic perspective, as well as guiding them in the struggle to determine how best to fulfill the divine intention that has been ingeniously implanted within them. I believe that most of them will converge on a universal syndetic philosophy, whose chief practical expression will be acts of cosmic engineering meant to protect and shape these noble structures, into ever more perfect manifestations of clipnicity.

Why does she care about music and sunsets? Why would she have scope insensitivity bias? She's programmed to care about the number, not the log, right? And if she was programmed to care about the log, she'd just care about the log, not be unable to appreciate the scope.

Regex:

It reads to me like a human paperclip maximizer trying to apply lesswrong's ideas.

I agree; the OP is anthropomorphic, and there is no reason to assume that an AGI paperclip maximizer would think the way we do. In fact, in Superintelligence, Bostrom avoids any assumption that an AGI would have subjective conscious experiences. An unconscious AGI paperclip maximizer would presumably not be troubled by the fact that a paperclip is just an ill-defined configuration of matter, or by anything else, for that matter.

I imagine that it's a good illustration of what a humanlike uploaded intelligence that's had its goals/values scooped out and replaced with valuing paperclips might look like.

Val:

Indeed, and such an anthropomorphic optimizer would soon cease to be a paperclip optimizer at all if it could realize the "pointlessness" of its task and re-evaluate its goals.

Well, humans have existentialism despite its having no utility. It just seems like a glitch you end up with once your consciousness/intelligence reaches a certain level (my reasoning is this: high intelligence needs to analyse many points of view, many counterfactuals, and technically these end up internalized to some degree). A human trying to excel at his general intelligence, a process that allows him to reproduce better, ends up wondering about the meaning of life. That could in turn drastically decrease his willingness to reproduce, but it is overridden by other imperatives. In the same way, I believe an AGI would have subjective conscious experiences, as a form of glitch of general intelligence.

Well, glitch or not, I'm glad to have it; I would not want to be an unconscious automaton! As Socrates said, "The life which is unexamined is not worth living."

However, it remains to be seen whether consciousness is an automatic by-product of general intelligence. It could be the case that consciousness is an evolved trait of organic creatures with an implicit, inexact utility function. Perhaps a creature with an evolved sense of self and a desire for that self to continue to exist is more likely to produce offspring than one with no such sense of self. If this is the reason that we are conscious, then there is no reason to believe that an AGI will be conscious.

"I would not want to be an unconscious automaton!"

I strongly doubt that such a sentence bears any meaning.


Maybe she cares about other things besides paperclips, including the innate desire to be able to name a single, simple and explicit purpose in life.

This is not supposed to be about non-human AGI paperclip maximisers.

It seems to me that the subject of your narrative has a single, simple and explicit purpose in life; she is after all a paperclip maximizer. I suspect that (outside of your narrative) one key thing that separates us natural GIs from AGIs is that we don't have a "single, simple and explicit purpose in life", and that, I suspect, is a good thing.

Not necessarily. You are assuming that she has an explicit utility function, but that need not be the case.

Good point. May I ask, is "explicit utility function" standard terminology, and if so, is there a good reference somewhere that explains it? It took me a long time to realize the interesting difference between humans, who engage in moral philosophy and often can't tell you what their goals are, and my model of paperclippers. I also think that not understanding this difference is a big reason why people don't understand the orthogonality thesis.

No, I do not believe that it is standard terminology, though you can find a decent reference here.


They're often called explicit goals, not utility functions. "Utility function" is terminology from a very specific moral philosophy.

Also note that the orthogonality thesis depends on an explicit goal structure. Without such an architecture it should be called the orthogonality hypothesis.


Substitute "Friendly AI" or "Positive Singularity" for "Paperclip Maximizing" and read again.


I see what you did there.