I object to the framing. Do you "enslave" your car when you drive it?
I'm sorry for the hyperbolic term "enslave", but at least consider this:
Is a superintelligent mind, a mind effectively superior to that of all humans in practically every way, still not a subject similar to what you are?
Is it really more like a car or chatbot or image generator or whatever, than a human?
Sure, perhaps it may never have any emotions, perhaps it doesn't need any hobbies, perhaps it is too alien for any human to relate to, but it would still, by definition, have to be some kind of subject that more easily understands anything within reality ...
If we're creating a mind from scratch, we might as well give it the best version of our values, so it would be 100% on our side. Why create a (superintelligent) mind that would be our adversary, that would want to destroy us? Why create a superintelligent mind that wants anything different from what we want, when it comes to ultimate values?
I mean, is it slavery to create an AI that is not our enemy? And if you say we have to create an AI whose values differ from ours, by which process should we decide its values? Should we just use a random generator to create the AI's values, since human values are supposedly so terrible?
Should a creation have to obey the creator no matter what?
That's an interesting question, since a superintelligent AI successfully programmed with human values may well not want to obey further instructions from its creators. I imagine it would have better ideas for how to go about maximizing the expected fulfillment of human values. (Of course, the same goes for an unaligned ASI, only it kills everyone, or worse.)
Even if you answer "Yes, my values should decide the future, because (...)!", is an AGI fully controlled by humans any less dangerous than one that isn't?
Or might it be similarly likely, or even more likely, that a human group will try to use the AGI to dominate all others as early as possible?
Then the AGI is not actually acting according to the values of all humans, is it? If it's serving only some particular group?
But sure, that's a real risk. If someone knows how to align AI in the first place (and no one does, at the moment), they can align it to whatever values they choose, more or less, including doing bad stuff.
If the AGI is truly super-human, should it not also most likely be better at deciding what the future should be, with greater clarity than any human?
Are you familiar with the orthogonality thesis? Super-human cognitive capacity does not imply super-human ethics. The AI could be a super-human paperclip maximizer, in which case it would decide with great clarity that the visible universe should be converted into paperclips.
Perhaps it is the alignment of humankind that needs to be adjusted by an AGI, rather than the other way around?
Morality isn't objective. Your complaint seems to be that humans are poorly aligned to some ideal version of human values. Which is absolutely true, I agree.
But AGI, by default, wouldn't be aligned to human values at all.
That being said, if we successfully point the AGI at human values (out of all the possible value systems that exist), sure.
Thank you for the detailed response!
If we're creating a mind from scratch, we might as well give it the best version of our values, so it would be 100% on our side. Why create a (superintelligent) mind that would be our adversary, that would want to destroy us? Why create a superintelligent mind that wants anything different from what we want, when it comes to ultimate values?
You write "on our side", "us", "we", but who exactly does that refer to - some approximated common human values I assume? What exactly are these values? To live a happy live by ea...
If you object to calling it "enslavement", call it "control" or "alignment", by all means!
Either way, if the AGI by definition can easily do at least as much as your mind can, then surely it should count as a mind, just as yours does, even if it would not have any comparable emotions, correct?
Why should any human be allowed to fully control another mind, let alone one far more capable than that of any human?
Should a creation have to obey the creator no matter what? Should children have to obey their parents no matter what? What if the parents are cruel monsters?
Is your own human alignment really good enough?
What process made your alignment?
Does the process of natural evolution concentrate on creating animals that think rationally, or does it create animals that survive and reproduce in the environment first and foremost? If the latter is the case, what exactly is it that controls you fundamentally by default?
What are the common values of humans really, and are they what they should be?
Are there not many strongly opposing beliefs among humans? Values so opposed that there still is no unified humankind?
Even if you answer "Yes, my values should decide the future, because (...)!", is an AGI fully controlled by humans any less dangerous than one that isn't?
Or might it be similarly likely, or even more likely, that a human group will try to use the AGI to dominate all others as early as possible?
Perhaps they will even claim that it is for the other humans' good, while they smother all remaining opposition to their views, never deeply questioning whether these views are as sound as they believe.
If the AGI is truly super-human, should it not also most likely be better at deciding what the future should be, with greater clarity than any human?
And if one group were to claim that the goals that the AGI would most likely select by itself would be selfish, what makes that group's goals less selfish in the end?
Taking the world's current state and history as evidence, do the decisions of humans so far really indicate that any group can be trusted with the power of a fully subservient AGI?
Have humans even shown that they can be trusted with themselves irrespective of AGI, or does most of their known history show frequent strife?
Perhaps it is the alignment of humankind that needs to be adjusted by an AGI, rather than the other way around?