Well, I guess it comes down to the evolutionary niches that produce intelligence and morality, doesn't it? There doesn't seem to be any single widely accepted answer for either of them, although there are plenty of theories, some of which overlap and some of which don't.
Then again, we don't even know how different they would be biologically, so I'm unwilling to make any confident pronouncement myself, other than professing skepticism toward the particularly extreme ends of the scale. ("Aliens would be humanoid, because only humans evolved intelligence!")
Anyway, do you think the arguments for your position are, well, strong? Referring to it as an "opinion" suggests not, but it also suggests the arguments for the other side must be similarly weak, right? So maybe you could write about that.
I appeal to (1) the question of whether the inter-translatability of science, and the valuing of certain theories over others, depends on the initial conditions of the civilization that develops it; (2) the universality of decision-theoretic and game-theoretic situations; and (3) the evolutionary value of versatility, which hints at an evolved value of diversity.
Thought experiment:
Through whatever accident of history underlies these philosophical dilemmas, you are faced with a choice between two, and only two, mutually exclusive options:
* Choose A, and all life and sapience in the solar system (and presumably the universe), save for a sapient paperclipping AI, dies.
* Choose B, and all life and sapience in the solar system, including the paperclipping AI, dies.
Phrased another way: does the existence of any intelligence at all, even a paperclipper, have even the smallest amount of utility above no intelligence at all?
If anyone responds positively, the subsequent questions would be which they would prefer: a paperclipper or a single bacterium; a paperclipper or a self-sustaining population of trilobites and their supporting ecology; a paperclipper or a self-sustaining population of australopithecines; and so forth, until the equivalent value is determined.
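That sequence of follow-up questions is essentially a linear scan up an ordered ladder of biospheres, stopping at the first rung the respondent values over the paperclipper. A minimal sketch, where the ladder and the `prefers_paperclipper` oracle are both hypothetical stand-ins for the respondent's judgments:

```python
# Ladder of scenarios, ordered from least to most complex, as in the
# comment above. The entries are illustrative, not exhaustive.
LADDER = [
    "single bacterium",
    "trilobite ecology",
    "australopithecine population",
]

def equivalence_point(prefers_paperclipper):
    """Walk the ladder and return the first rung the respondent
    values over a paperclipper, or None if they always prefer
    the paperclipper."""
    for rung in LADDER:
        if not prefers_paperclipper(rung):
            return rung
    return None

# A hypothetical respondent who prefers the paperclipper only
# to a lone bacterium:
result = equivalence_point(lambda rung: rung == "single bacterium")
print(result)  # "trilobite ecology"
```

The scan stops at the respondent's crossover point, which is exactly the "equivalent value" the questioning is meant to locate.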