Thought experiment:
Through whatever accident of history underlies these philosophical dilemmas, you are faced with a choice between two, and only two, mutually exclusive options:
* Choose A, and all life and sapience in the solar system (and presumably the universe), save for a sapient paperclipping AI, dies.
* Choose B, and all life and sapience in the solar system, including the paperclipping AI, dies.
Phrased another way: does the existence of any intelligence at all, even a paperclipper, have even the smallest amount of utility above no intelligence at all?
If anyone responds positively, subsequent questions would be which would be preferred: a paperclipper or a single bacterium; a paperclipper or a self-sustaining population of trilobites and their supporting ecology; a paperclipper or a self-sustaining population of australopithecines; and so forth, until the equivalent value is determined.
I was going more with my gut feelings than with reasoning; anyway, thinking about the possibility of intelligent life arising again sounds like fighting the hypothetical to me (akin to thinking about the possibility of being incarcerated in the trolley dilemma), and also I'm not sure there's any guarantee that such new intelligent life would be any more humane than the paperclipper.
Well, he did say “solar system (and presumably the universe)”. So the universe is mentioned in the hypothetical, but the “presumably” suggests the hypothetical does not actually dictate the universe's fate. And given that the universe is much bigger than the solar system, it makes sense to me to think about it. (And hey, it’s hard to be less humane than a paperclipper and still be intelligent. I thought that’s why we use paperclippers in these things.)
If the trolley problem mentioned “everybody on Earth” somewhere,...