
Houshalter comments on Open thread, Aug. 10 - Aug. 16, 2015 - Less Wrong Discussion

Post author: MrMind, 10 August 2015 07:29AM



Comment author: Houshalter 10 August 2015 01:12:25PM 11 points

Do Artificial Reinforcement-Learning Agents Matter Morally?

I've read this paper and found it fascinating. I think it's very relevant to Less Wrong's interests, not just because it's about AI, but also because it asks hard moral and philosophical questions.

There are many interesting excerpts. For example:

The drug midazolam (also known as ‘versed,’ short for ‘versatile sedative’) is often used in procedures like endoscopy and colonoscopy... surveyed doctors in Germany who indicated that during endoscopies using midazolam, patients would ‘moan aloud because of pain’ and sometimes scream. Most of the endoscopists reported ‘fierce defense movements with midazolam or the need to hold the patient down on the examination couch.’ And yet, because midazolam blocks memory formation, most patients didn’t remember this: ‘the potent amnestic effect of midazolam conceals pain actually suffered during the endoscopic procedure’. While midazolam does prevent the hippocampus from forming memories, the patient remains conscious, and dopaminergic reinforcement-learning continues to function as normal.

Comment author: Betawolf 10 August 2015 09:41:29PM 4 points

The author is associated with the Foundational Research Institute, whose interests overlap heavily with those of Less Wrong, yet a few casual searches suggest they haven't been mentioned here before.

Briefly, they seem to be focused on averting suffering, approached from several angles, including effective altruism outreach, animal suffering, and AI risk as a potential cause of great suffering.