It both is and isn't an entry-level question. On the one hand, the expectation behind your question is the one LW was founded to shed light on, back when EY was writing The Sequences. On the other hand, it's still a topic a lot of people disagree on and write about, here and elsewhere.
There are at least two interpretations of your question I can think of, with different answers, from my POV.
What I think you mean is, "Why do some people think ASI would share some resources with humans as a default or likely outcome?" I don't think that, and I don't agree with the arguments I've seen put forth for it.
But I don't expect our future to be terrible in the most likely case. Part of that is the chance that we don't get ASI for one reason or another. But most of it is the chance that we will, by the time we need it, have developed an actually satisfying answer to "How do we get an ASI such that it shares resources with humans in a way we find to be a positive outcome?" None of us has that answer yet. But somewhere out in mind design space are possible ASIs that value human flourishing in ways we would reflectively endorse and that would be good for us.
Humans, as social animals, have a strong instinctual bias towards trusting conspecifics in prosperous times, which makes sense from a game-theoretic strengthen-the-tribe perspective. But I think that leaves us, as a collectively dumb mob of naked apes, entirely lacking a sensible level of paranoia when building an ASI that has no existential need for pro-social behavior.
The one salve I have for hopelessness is that perhaps the Universe will be boringly deterministic and 'samey' enough that ASI will find it entertaining to have agentic humans wandering around doing their mildly unpredictable thing. Although maybe it will prefer to manufacture higher levels of drama (not good for our happiness).
I am really sorry if this is a very entry-level question, but it has been on my mind a lot recently and I haven't found any satisfying answers. It is addressed only to those who expect our future not to be totally terrible.
Let's assume for a second that we manage to create Artificial Superintelligence (ASI). Let's also assume that this ASI takes over our planet. In this scenario, why would the ASI not do one of the following: 1) exploit humans in pursuit of its own goals, while giving us the barest minimum to survive (effectively making us slaves), or 2) take over the resources of the entire solar system for itself and leave us starving without any resources?
Under such a scenario, why would we expect human lives to be any good (much less a utopia)?