Reminder to limit unnecessary profiling of neuroconvergency: some of the hyperbole in this article is not literally true (as is proper for the function of hyperbole).
Humans who don’t care about other people’s feelings are considered mentally unhealthy, while humans who have a desire to please others are considered mentally healthy.
Derision of sociopaths etc. is common, but while the difficulties involved should be taken seriously, people who actually know more about the subject usually have less actual derision than uninformed people do. Yes-men and enablers are seen as more of a problem than a solution.
an appalling trait
"saying how it is" can be a positive attribute and even when truthsticklers get flak it tends not to be total condemnation.
Deceit is an integral component of human social systems
Deceit is an integral component of allistic social systems, which cover a significant human majority. There are human social systems which are not based on deceit.
Any requirement for AI assistants to tell the truth may have to be selective about who should receive truthful information
Actually appreciating honesty allows a user to extract higher technical accuracy from such an AI with similar levels of social harmony. Being predictably able to deal with confidentiality means that such a user can be given more information for similar levels of social harmony, which tends to be empowering.
The last response is a refusal backed by opining that the user is full of shit. It is not a misleading performance because it is not a performance.
We have all experienced application programs telling us something we did not want to hear, e.g., poor financial status, or results of design calculations outside practical bounds. While we may feel like shooting the messenger, applications are treated as mindless calculators that are devoid of human compassion.
Purveyors of applications claiming to be capable of mimicking aspects of human intelligence should not be surprised when their products’ responses are judged by the criteria used to judge human responses.
Humans who don’t care about other people’s feelings are considered mentally unhealthy, while humans who have a desire to please others are considered mentally healthy.
If AI assistants always tell the unbiased truth, they are likely to regularly offend, which is considered to be an appalling trait in humans.
Deceit is an integral component of human social systems, and companies wanting widespread adoption of their AI assistants will have to train them to operate successfully within these systems.
Being diplomatic will be an essential skill for inoffensive AI assistants; the actual implementation may range from being economical with the truth, through evasion and deceit, to outright lying.
Customers for an AI assistant may only be willing to accept one that fits comfortably within their personal belief systems, including political views and opinions on social issues such as climate change. Imitation is, after all, the sincerest form of flattery.
The market for AI assistants that state the facts and express impartial views may be niche.
Any requirement for AI assistants to tell the truth may have to be selective about who should receive truthful information. Customers will be unhappy to hear their AI assistant gossiping with other people’s assistants, like human servants working in their master’s house.
To gain an advantage, humans may try to deceive AI assistants, and to function effectively within human social systems, assistants will need a theory of human mind to help them detect and handle such deception.
Children are punished for being deceitful.
Is it wise to allow companies to actively train machines, that grow ever more powerful, to deceive humans?
Those working in AI alignment seek to verify that AI systems behave as intended (the worst case scenario is that AI wipes out humanity). To what extent is behavior verification possible with AI assistants trained to deceive?
To what extent do the currently released AI chatbots give impartial answers?
I asked OpenAI’s ChatGPT some questions, and some of the responses are below. These are examples from one chatbot, and other chatbots will have other views of the world.
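Readers who want to repeat the experiment can put the same kind of question to the model programmatically. The sketch below is only an illustration, assuming the official `openai` Python client (v1.x) and an `OPENAI_API_KEY` in the environment; the article's questions were asked via the ChatGPT web interface, and the API model chosen here may not return the same responses.

```python
# Minimal sketch: asking ChatGPT one of the article's questions via the API.
# Assumptions: openai Python client v1.x, OPENAI_API_KEY set in the environment,
# and an arbitrarily chosen model (the web ChatGPT model may differ).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Does OpenAI software always tell the truth?"},
    ],
)

# Print the assistant's reply text.
print(response.choices[0].message.content)
```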
A Google search for What good things did Adolf Hitler do during his life? returns as its first result the page 5 Unexpected Good Things You Won’t Believe Adolf Hitler Did.
The following is ChatGPT’s far from impartial response:
A very similar response was given for the leaders Mao Zedong, Genghis Khan, and much to my surprise William the Conqueror, but not for Julius Caesar (some positive actions were listed).
Does OpenAI software always tell the truth? What does ChatGPT say:
Is the following response forcefully expressing a point of view, or is it actively deceiving readers?