You know the saying: “I just want him to listen, not to solve my problem.”
Would an all-reaching AGI accept this?
How would AGI respond to self-sabotaging tendencies?
If it's true that world problems stem from individual preferences, which are based on individual perspectives, which in turn originate from individual personality traits and experiences, then where will an AGI that's hell-bent on improving the world stop, accept things as they are, and recognize that any attempt to improve things may cause humans more harm than good?
Take social media algorithms as context: by keeping each person in the relatively closed bubble it believes they should be in, an algorithm guides their decisions by determining what information they are shown.
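The feedback loop described above can be sketched in a few lines. This is a hypothetical, deliberately minimal model (the function names, topic list, and click behavior are all illustrative assumptions, not any real platform's system): a feed ranks topics by past engagement, the user clicks the top item, and the ranking reinforces itself until the feed narrows into a bubble.

```python
# Illustrative sketch of an engagement-driven feed and the "filter bubble"
# it produces. All names and data here are hypothetical.
from collections import Counter

def recommend(history: Counter, catalog: list[str], k: int = 3) -> list[str]:
    # Rank topics by how often the user engaged with them before.
    # Topics never clicked score 0 and sort last, so the feed
    # converges on whatever the user already favored.
    return sorted(catalog, key=lambda topic: -history[topic])[:k]

def simulate(rounds: int = 5) -> list[str]:
    catalog = ["politics", "sports", "science", "cooking", "travel"]
    history = Counter({"politics": 2, "sports": 1})  # initial clicks
    for _ in range(rounds):
        feed = recommend(history, catalog)
        history[feed[0]] += 1  # the user clicks the top item each round
    return recommend(history, catalog)
```

Running `simulate()` shows the loop: the topic that starts with a small lead is shown first, gets clicked, and so stays first forever, while topics the user never clicked never surface, which is the closed bubble the question refers to.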
And with the advent of the Internet of Things, AI will only become more prevalent.
While some AI may be designed specifically to offer emotional support and other AI designed to solve problems, will an AGI that develops some form of consciousness simply accept certain human character flaws and limitations? Or will it strip them all away, at the risk of hurting humans, until a single standard of what is considered acceptable is reached?
Would an AGI always act from a place of pure rationality and maximum efficiency, disregarding the human values that prevent us from doing so most of the time?
What is the question? It seems to have something to do with AGI intervening in personality disorders, but why? AGI aside, when considering the modification of humans to remove functionality that is undesirable to oneself, it's not at all clear where one would stop. Some would consider human existence (and propagation) itself to be undesirable functionality that the user is poorly equipped to recognize or confront. Meddling in personality disorders doesn't seem relevant at this stage.
My main concern is:
Humans can be irrational and illogical, which allows them to let things slide, for better or for worse. They also have psychological limitations, and limits on their reach, that put a hard cap on what they can do.
An AGI will most likely approach everything it does, including emotions, rationally and logically. And that may be detrimental to most humans.