1 - An artificial superoptimizer is likely to be developed soon.
3 - Most utility functions do not have optima with humans in them. Most utility functions do not have a term for humans at all.
4 - "Why haven’t we exterminated all mice/bugs/cows then?" draws quite a poor analogy. Firstly, we are not superoptimizers. Secondly, and more importantly, we care about living beings somewhat. The optimum of the utility function of the human civilization quite possibly does have mice/bugs/cows, perhaps even genetically engineered to not experience suffering. We are not completely indifferent to them.
The relationship between most possible superoptimizers and humanity is not like the relationship between humanity and mice at all - it is much more like the relationship between humanity and natural gas. Natural gas, not mice, is a good example of something humans are truly indifferent to - there is no term for it in our utility function. We don’t hate it, we don’t love it, it is just made of atoms that we can, and do, use for something else.
Moreover, the continued existence of natural gas probably does not pose the same threat to us that our continued existence would pose to a superoptimizer with no term for humans in its utility function - just as we have no term for natural gas in ours. Natural gas cannot attempt to turn us off, and it cannot create "competition" for us in the form of a rival species with capabilities similar to, or surpassing, our own.
P.S. If you don’t like, or find confusing, the terminology of "optimizers" and "utility functions", feel free to forget about all of that. Think instead of physical states of the universe. Out of all possible states the universe could find itself in, very, very, very few contain any humans. Picking a random state of the universe and creating a superintelligence that steered the universe towards that state would result in an almost guaranteed apocalypse. Of course, we want our superintelligence to steer the universe towards particular states - that’s kind of the whole point of the entire endeavor. Were it not so, we would not be attempting to build a superintelligence - a sponge would suffice. We essentially want to create a super-capable universe-steerer. The problem is getting it to steer the universe towards states we like - and this problem is very hard because (among other things) we currently have no way, for any desired target state whatsoever, to program the desire to steer the universe towards that state into anything at all.
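A minimal toy sketch of that counting argument, under deliberately made-up assumptions (a "state of the universe" is just a 64-bit string, and "contains humans" means matching one specific 40-bit pattern):

```python
import random

# Toy illustration - every number here is made up for the sake of the example.
# Model "states of the universe" as 64-bit strings, and "contains humans" as
# the state matching one specific 40-bit pattern in its top 40 bits. The real
# state space and the real predicate are vastly more complex; the point is only
# that structured outcomes occupy a tiny fraction of all possible outcomes.

STATE_BITS = 64
HUMAN_PATTERN = random.getrandbits(40)  # stands in for "an arrangement of atoms that is humanity"

def contains_humans(state: int) -> bool:
    # A state "contains humans" iff its top 40 bits match the pattern.
    return (state >> (STATE_BITS - 40)) == HUMAN_PATTERN

samples = 1_000_000
hits = sum(contains_humans(random.getrandbits(STATE_BITS)) for _ in range(samples))
print(f"{hits} of {samples} random target states contain humans")
# Expected fraction: 2**-40, roughly one in a trillion - the printout will
# almost certainly read "0 of 1000000".
```

The same counting is behind point 3: the optimum of a randomly chosen utility function is, in effect, a randomly chosen state, so it almost never contains humans.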
Yes, we would be even worse off if we pulled a superintelligent optimizer at random out of the space of all possible optimizers. That would, with almost absolute certainty, cause swift human extinction. The current techniques are somewhat better than taking a completely random shot in the dark. However, especially given point No.2, that can offer us only very little comfort.
All optimizers have at least one utility function. At any given moment in time, an optimizer is behaving in accordance with some utility function. It might not be explicitly representing this utility function, it might not even be aware of the concept of utility functions at all - but at the end of the day, it is behaving in one particular way rather than another. It is moving the world towards one particular state rather than another, and there is some utility function that has its optimum in precisely that state. In principle, any object at all can be modeled as having a utility function, even a rock.
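A minimal sketch of that modeling move, with hypothetical names and a deliberately trivial construction: given any observed behavior, one can write down a utility function whose optimum is exactly the state that behavior steers the world into.

```python
from typing import Callable, Hashable

# Given any object's observed behavior - a transition rule mapping each state
# to the state the object actually moves the world into - we can always write
# down a utility function that the behavior maximizes: score 1 for the state
# the object actually produces next, 0 for every alternative.

def revealed_utility(behavior: Callable[[Hashable], Hashable],
                     current_state: Hashable) -> Callable[[Hashable], int]:
    chosen = behavior(current_state)
    return lambda candidate: 1 if candidate == chosen else 0

# A rock "behaves" by leaving the world unchanged (the identity transition),
# so a rock maximizes the utility function that scores the status quo at 1.
rock_behavior = lambda state: state
u = revealed_utility(rock_behavior, "rock sitting on the ground")
print(u("rock sitting on the ground"))  # 1 - the state the rock "chooses"
print(u("rock rolled downhill"))        # 0 - every other state
```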
Naturally, an optimizer can have not just one but multiple utility functions. That only makes the problem worse, because then all of those utility functions need to be aligned.