
Panorama comments on Open Thread August 31 - September 6 - Less Wrong Discussion

5 Post author: Elo 30 August 2015 09:26PM



Comment author: Panorama 05 September 2015 07:22:32PM, 2 points

A Defense of the Rights of Artificial Intelligences by Eric Schwitzgebel and Mara [official surname still to be decided]

There are possible artificially intelligent beings who do not differ in any morally relevant respect from human beings. Such possible beings would deserve moral consideration similar to that of human beings. Our duties to them would not be appreciably reduced by the fact that they are non-human, nor by the fact that they owe their existence to us. Indeed, if they owe their existence to us, we would likely have additional moral obligations to them that we don’t ordinarily owe to human strangers – obligations similar to those of parent to child or god to creature. Given our moral obligations to such AIs, two principles for ethical AI design recommend themselves: (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear. Since human moral intuition and moral theory evolved and developed in contexts without AI, those intuitions and theories might break down or become destabilized when confronted with the wide range of weird minds that AI design might make possible.

Full version available here.

As always, comments are warmly welcomed -- either by email or on this blog post. We're submitting it to a special issue of Midwest Studies with a hard deadline of September 15, so comments before that date would be especially useful.