Thought experiment:
Through whatever accident of history underlies these philosophical dilemmas, you are faced with a choice between two, and only two, mutually exclusive options:
* Choose A, and all life and sapience in the solar system (and presumably the universe), save for a sapient paperclipping AI, dies.
* Choose B, and all life and sapience in the solar system, including the paperclipping AI, dies.
Phrased another way: does the existence of any intelligence at all, even a paperclipper, have even the smallest amount of utility above no intelligence at all?
If anyone answers yes, the follow-up questions would be which is preferred: a paperclipper or a single bacterium; a paperclipper or a self-sustaining population of trilobites and their supporting ecology; a paperclipper or a self-sustaining population of australopithecines; and so forth, until the point of equivalent value is found.
True. What I said was in reference to:
> Within a system of self-replicating information...maybe, just maybe, you'll start getting little selfish bits that are more concerned with replicating themselves than they are with making paperclips. It all starts from there.
Assuming, of course, that the greater part of the paperclipper doesn't just find a way to crush these lesser selfish pieces. They're basically a cancer on it.
Oh, OK then. On this site I usually understand “paperclipper” to mean “something that will transform all the universe into paperclips unless stopped by someone smarter than it”, not just “something really good at making paperclips without supervision”. Someone please hit me with a clue stick if I’ve been totally wrong about that.