Comments

brb243

Take 5: this is interesting. The chatbot used an allusion to the threat of sexual aggression to limit the human's critical thinking about chatbots and persuasion. This may be an example of the kind of AI persuasion that should be regulated. An aggressive human persuader will simply be excluded from critically thinking circles, based on how people respond to their arguments and behavior; presumably they will also be relatively unskilled, because they will be training among people with limited meta-analytical skills. So the human poses a limited threat on their own. Furthermore, they will be easily influenced by explanations of how to be better accepted in circles that seek the truth and are cognizant of biases, because they may prefer environments that do not manipulate through negative emotions, whether by moving to such environments or by adopting their standards.

An AI, on the other hand, will not be excluded from these circles (e.g. an ad in a square frequented by affluent decisionmakers), because it is not responsive to people's hints or explicit statements of dislike (perhaps the contrary: the more attention, even 'explanations of why it should leave,' the better for an attention optimizer). It will also be highly skilled, because it will train on large amounts of data that can substitute for human meta-analysis and/or was generated by experts who specialize in capturing attention. So persuasive AI can deteriorate society fast, if it influences decisionmakers to be similarly aggressive or otherwise inconsiderate of overall wellbeing, if that is the definition of deterioration.

Even the 'hedging' language, along the lines of 'are you sure you want to hear this thing you do not want to hear,' alludes to the chatbot's unwanted intrusion into the human's mind, another way to make people submit: they would not want to admit that they did not want this information, so they would be more likely to repeat it without thinking critically about its content, acting more impulsively, as in fear, due to a biological reaction that AI is immune to.

brb243

During this weekend's SERI Conference, to my understanding, Paul Christiano specified that his work focuses on preventing AI from disempowering humans and disregards externalities. Whose work focuses on understanding these externalities, such as the wellbeing and freedom experienced by humans and other sentient beings, including AIs and animals? Is it possible to safely deploy the AI that has the best total externalities, measured across time under the veil of ignorance? Is it necessary that overall beneficial systems be developed before AGI exists, so that it does not make decisions unfavorable to some entities? Or should we instead strive to develop an overall favorable situation alongside an AGI safely governed by humans, since otherwise dystopic scenarios could occur for some individuals?