Thought experiment:
Through whatever accident of history underlies these philosophical dilemmas, you are faced with a choice between two, and only two, mutually exclusive options:
* Choose A, and all life and sapience in the solar system (and presumably the universe), save for a sapient paperclipping AI, dies.
* Choose B, and all life and sapience in the solar system, including the paperclipping AI, dies.
Phrased another way: does the existence of any intelligence at all, even a paperclipper, have even the smallest amount of utility above no intelligence at all?
If anyone responds positively, subsequent questions would ask which would be preferred: a paperclipper or a single bacterium; a paperclipper or a self-sustaining population of trilobites and their supporting ecology; a paperclipper or a self-sustaining population of australopithecines; and so forth, until the point of equivalent value is determined.
Values ultimately have to map to the real world, though, even if it's in a complicated way. If something wants the same world as me to exist, I'm not fussed as to what it calls the reason. But how likely is it that they will converge? That's what matters.
I presume by "the same world" you mean a sufficiently overlapping class of worlds; I don't think "the same world" is well defined otherwise. I also think that determining, in particular cases, which "world" you want affects who you are.