Thank you so much!
During this weekend's SERI Conference, to my understanding, Paul Christiano specified that his work focuses on preventing AI from disempowering humans and disregards externalities. Whose work focuses on understanding these externalities, such as the wellbeing and freedom experienced by humans and other sentience, including AI and animals? Is it possible to safely employ the AI that has the best total externalities, measured across time under the veil of ignorance? Is it necessary that overall beneficial systems are developed prior to the existence of AGI, so that ...
Take 5: this is interesting. The chatbot used an allusion to the threat of sexual aggression to limit the human's critical thinking regarding chatbots and persuasion. This may be an example of the form of AI persuasion that should be regulated, because an aggressive person will simply be excluded from critical-thinking circles as others observe human responses to their arguments or behavior - plus, presumably, they will be relatively unskilled, because they will have been training among persons with limited meta-analytical skills. So, the human will pose a limited threat...