I wrote a few controversial articles on LessWrong recently that got downvoted. As a consequence, I can now leave only one comment every few days. This makes it effectively impossible to participate in ongoing debates, or even to reply to the comments people have made on the controversial post. I can't even respond to objections on my upvoted posts. This seems like a pretty bad rule: people who express controversial views that many on LW dislike shouldn't be prevented from communicating efficiently. A better policy would probably be to drop the limit entirely.
We don't know for certain that an AI superintelligence will be empathetic (not all humans are), but we do know it is trained on human data, where empathy is one aspect of what it would learn alongside all the other topics covered in the corpus of human knowledge. The notion that it will immediately turn malevolent for no good reason, just to match a sci-fi fantasy, describes a fictional monster rather than a superintelligence.
It would take an irrational AI to follow the Doomer script, and the Doomers are themselves being irrational when they ignore the mitigating factors against an AI apocalypse or dismiss them with hand-waving.
It's a scale. You're intelligent, and you could declare war on chimpanzees, but you mostly ignore them. You share 98.8% of your DNA with chimpanzees, and yet, to my knowledge, you never write about them or go visit them. They have almost no relevance to your life.
The gap between an AI superintelligence and humans will likely be larger than the gap between humans and chimpanzees. The idea that such a system would follow the AI doomer script seems very, very improbable. And anyone who truly believed it would be an AI nihilist: worrying would be mostly a waste of time, since by their own admission there is nothing we could do to prevent our doom.
We don't know whether this is a simulation. If consciousness is computable, then we could create such a simulation without understanding how base reality works. A separate question, however, is whether a binary programming language is capable of simulating anything in the absence of consciousness. The numbers cannot do anything on their own, since they're an abstraction. A library isn't conscious; it requires a conscious observer for it to mean anything. Is it possible for language (of any kind) to simulate anything without a conscious mind encoding the meanings? I am starting to think the answer is "no".
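As a loose illustration of the point that symbols have no intrinsic meaning, here is a minimal Python sketch (a toy example of my own, not drawn from any particular source) showing that the very same bytes "mean" entirely different things depending on how an external interpreter chooses to read them:

```python
import struct

# Four raw bytes with no meaning of their own.
raw = b"\x42\x48\x49\x21"

# Three different "readers" impose three different interpretations:
as_int = int.from_bytes(raw, byteorder="big")  # an unsigned integer
as_float = struct.unpack(">f", raw)[0]         # an IEEE 754 single-precision float
as_text = raw.decode("ascii")                  # the ASCII string "BHI!"

print(as_int)    # 1112033569
print(as_float)  # roughly 50.07
print(as_text)   # BHI!
```

None of these interpretations lives "in" the bytes; each is imposed by whatever reads them, which is the sense in which a library of symbols, by itself, means nothing.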
This is all speculation, and until we have a better understanding of consciousness and energy we probably won't have a satisfactory answer. We know that the movement of electricity through neurons and transistors can give rise to claims of phenomenal consciousness, but whether that's an emergent property or something more fundamental is an open question.