RichardKennaway comments on Open Thread August 31 - September 6 - Less Wrong Discussion

Post author: Elo, 30 August 2015 09:26PM (5 points)


Comment author: RichardKennaway 04 September 2015 08:04:12PM 3 points [-]

"Do Artificial Reinforcement-Learning Agents Matter Morally?" Yes, says Brian Tomasik, even present-day ones (by a very small but nonzero amount). He foresees their ethical significance increasing in the near future, not because of strong AI, but because ordinary applications of reinforcement learning are spreading through our technology.

The argument, briefly: on various accounts of what consciousness physically is, RL programs display the relevant features to some extent. Therefore they have a nonzero degree of consciousness, and hence a nonzero degree of moral standing: enough that we should be thinking now about guidelines for the ethical creation of such software.

He suggests that, paralleling the guidelines for the use of animals in research, RL algorithms should be replaced by other methods whenever possible; where they must be used, their numbers should be reduced, and they should be driven by rewards, not punishments.

He considers the idea of an organisation, People for the Ethical Treatment of Reinforcement Learners, and of embedding RL algorithms in humanoid bodies and videogame characters, as ways of persuading the public that such agents have moral significance.

Comment author: Manfred 05 September 2015 06:05:29AM 0 points [-]

> driven by rewards, not punishments.

I would be much more morally concerned about reinforcement learning agents if this were a functional distinction.
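The objection can be made concrete with a toy sketch (invented for illustration, not taken from Tomasik's paper; the chain MDP, the `optimal_policy` helper, and the numbers are all hypothetical). In a fixed-horizon setting, adding a constant to every reward, e.g. turning negative "punishments" into positive "rewards", shifts every trajectory's return by the same amount, so the agent's behaviour is unchanged:

```python
def optimal_policy(rewards, horizon):
    """Finite-horizon value iteration on a small deterministic chain.

    rewards[s][a] is the reward for taking action a in state s;
    action a also deterministically moves the agent to state a.
    Returns the greedy first-step action for each state.
    """
    n = len(rewards)
    value = [0.0] * n              # value of each state with 0 steps remaining
    policy = list(range(n))
    for _ in range(horizon):
        new_value, policy = [], []
        for s in range(n):
            # Q-value of each action: immediate reward plus successor value.
            q = [rewards[s][a] + value[a] for a in range(n)]
            best = max(range(n), key=lambda a: q[a])
            policy.append(best)
            new_value.append(q[best])
        value = new_value
    return policy

# "Punishment" framing: every reward is negative.
punishing = [[-3, -1, -2], [-2, -3, -1], [-1, -2, -3]]
# "Reward" framing: the same numbers shifted up by 10, all positive.
rewarding = [[r + 10 for r in row] for row in punishing]

print(optimal_policy(punishing, horizon=5))   # → [1, 2, 0]
print(optimal_policy(rewarding, horizon=5))   # → [1, 2, 0] — identical policy
```

Since the constant shift raises every Q-value in a state by the same amount at each step of the backup, no argmax changes, which is the sense in which "rewards, not punishments" is a matter of labelling rather than a functional distinction.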

Comment author: RichardKennaway 05 September 2015 08:26:16AM 0 points [-]

> I would be much more morally concerned about reinforcement learning agents if this were a functional distinction.

He discusses that point in the paper.