I think that humans are sorta "unaligned", in the sense of being vulnerable to Goodhart's Law.
A lot of moral philosophy is something like: take our messy moral intuitions, distill them into explicit principles, and then follow those principles wherever they lead.
The resulting ethical system often ends up having some super bizarre implications and usually requires specifying "free variables" that are (arguably) independent of our original moral intuitions.
In fact, I imagine that optimizing the universe according to my moral framework would look quite Goodhartian to many people.
Some examples of implications of my current moral framework:
I'm sure there are many other examples.
I don't think that my conclusions are wrong per se, but... my ethical system has some alien and potentially degenerate implications when optimized hard.
It's also worth noting that although I stated those examples confidently (for rhetorical purposes), my stances on many of them depend on very specific details of my philosophy and have toggled back and forth many times.
No real call to action here, just some observations. Existing human ethical systems might look as exotic to the average person as some of the conclusions drawn by a kinda-aligned superintelligent AI (SAI).