Utility is defined by values, the supreme value being life. I tried to talk about this in my last post and I was downvoted into oblivion.
Human here,
Agreed, this reminds me of the Ship of Theseus paradox: if all the cells in your body are replaced, are you still the same person? (We don't care.)
Also reminds me of my favourite short piece of writing: "The Last Question" by Asimov.
The only important things are the things and ideas that help life, and the latter can only exist as reflections selected by intelligent beings.
I think how the takeover plays out boils down to a simple analysis of energy consumption versus production. AI will let humans do their thing only if humans' electricity consumption is negligible. Otherwise it will compete for those resources and win, since AI is cheaper (not in energy, but in money) and computationally faster.
In other words, humans have to create a new, abundant source of electricity before AGI, or humans will lose their means of subsistence.
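To make the energy comparison concrete, here is a rough back-of-envelope sketch in Python. Every number in it (per-capita electricity use, population, cluster size) is an assumed round figure chosen for illustration, not a measurement; the point is only to show the orders of magnitude involved.

```python
# Toy back-of-envelope comparison of electricity demand.
# All numbers are assumed round figures for illustration, not measurements.

PER_CAPITA_ELECTRICITY_W = 400   # assumed: average electricity use per person, ~400 W
POPULATION = 8e9                 # assumed: roughly 8 billion people
CLUSTER_POWER_W = 1e9            # assumed: one very large AI cluster, ~1 GW

# Total human electricity demand, in gigawatts.
human_demand_gw = PER_CAPITA_ELECTRICITY_W * POPULATION / 1e9

# How many gigawatt-scale clusters it would take to rival that demand.
clusters_to_match_humans = human_demand_gw / (CLUSTER_POWER_W / 1e9)

print(f"Humanity's electricity demand: ~{human_demand_gw:,.0f} GW")
print(f"Gigawatt-scale clusters needed to match it: ~{clusters_to_match_humans:,.0f}")
```

Under these assumed figures AI demand is still far below humanity's, so the competition only starts once AI scales up by a few orders of magnitude, or humans fail to expand generation.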
Exactly my thoughts reading the article.
But then how do we define complexity, and where do we stop the context of a thing?
Also, complexity without meaning is just chaos, so complexity presupposes a goal, a negentropy, a life.
An example of the context problem when defining complexity:
Computers only exist in a world where humans created them, so should human complexity be included in a computer's complexity? Or can we envision a reality where computers appeared without humans?
Reminds me of Maslow's pyramid.
I wrote an article about values, arguing that the supreme value is life and that every other value derives from it.
Watch out, this most probably does not align with your view at first glance:
https://www.lesswrong.com/posts/xx3St4KC3KHHPGfL9/human-alignment