Houshalter comments on Open thread, Aug. 10 - Aug. 16, 2015 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Do Artificial Reinforcement-Learning Agents Matter Morally?
I've read this paper and find it fascinating. I think it's very relevant to LessWrong's interests — not just because it's about AI, but because it asks hard moral and philosophical questions.
There are many interesting excerpts. For example:
The author is associated with the Foundational Research Institute, an organization whose interests overlap heavily with LessWrong's, yet some casual searches suggest it hasn't been mentioned here before.
Briefly, they focus on reducing suffering, approaching that goal from several angles, including effective altruism outreach, animal suffering, and AI risk as a potential cause of great suffering.