Thought experiment:
Through whatever accident of history underlies these philosophical dilemmas, you are faced with a choice between two, and only two, mutually exclusive options:
* Choose A, and all life and sapience in the solar system (and presumably the universe), save for a sapient paperclipping AI, dies.
* Choose B, and all life and sapience in the solar system, including the paperclipping AI, dies.
Phrased another way: does the existence of any intelligence whatsoever, even a paperclipper, have even the smallest amount of utility over no intelligence at all?
If anyone answers yes, the subsequent questions would ask which is preferred: a paperclipper or a single bacterium; a paperclipper or a self-sustaining population of trilobites and their supporting ecology; a paperclipper or a self-sustaining population of australopithecines; and so forth, until the point of equivalent value is found.
What you care about is not obviously the same as what is actually valuable to you. What's valuable is a confusing question, and you shouldn't be confident that you know its solution. You may provisionally decide to follow some moral principles (for example, to make consequentialist reasoning more tractable), but making that decision doesn't require being anywhere close to sure of its correctness. The best decision you can make may still, by your own estimation, be much worse than the best theoretically possible decision (here, I'm applying this observation to the decision to provisionally adopt certain moral principles).