All of Josh Gellers's Comments + Replies

Thanks for taking the time to look through my book. It's an important first step to having a fair dialogue about tricky issues. I'll say from the outset that I initially sought to answer two questions in my book: (1) could robots have rights? (I showed that this could easily be the case in terms of legal rights, which is already happening in the US in the form of pedestrian rights for personal delivery devices); and (2) should robots have rights? (here I also answered in the affirmative by taking a broad view of the insights provided by the Anthropocene, New...

I'm curious what you would think about my actual book, not just the review of it! As a political scientist who has spent a decade working on environmental rights, I come at the issue of robot rights from an arguably more interdisciplinary perspective. You can download the book for free here: https://www.amazon.com/Rights-Robots-Artificial-Intelligence-Environmental-ebook-dp-B08MVB9K28/dp/B08MVB9K28/ref=mt_other?_encoding=UTF8&me=&qid=

Charlie Steiner
Thanks for the link! I think the earlier chapters that I skimmed were actually more interesting to me than chapter 5, which I... well, skimmed in more detail. I'll make some comments anyway, and if I'm wrong it's probably my own fault for not being sufficiently scholarly.

Some general issues with the entire "robot rights" genre (justifications for me not including more such papers in this post), which I don't think you evaded:

* Rights-based reasoning isn't very useful for questions like what entities to create in the first place.
* AI capabilities are not going to reach the level of useful personal assistants and then plateau. They're going to keep growing. The useful notion of rights relies on the usefulness of certain legal and social categories, but sufficiently capable AI might be able to get what it wants in ways that undermine those categories (in the extreme case, without acting as a relevant member of society or a relevant subject of the legal system).
* Even in the near term, for those interested in mental properties as a basis for how we should treat AIs, the literature is too anthropomorphic, and reality (e.g. "what's it like to be GPT-3") is very, very not anthropomorphic.

I would say your book is above average here because it focuses on social / legal reasons for rights.

Thanks for this thorough post. What you have described is known as the “properties-based” approach to moral status. In addition to sentience, others have argued that it's intelligence, rationality, consciousness, and other traits that need to be present in order for an entity to be worthy of moral concern. But as I have argued in my 2020 book, Rights for Robots: Artificial Intelligence, Animal and Environmental Law (Routledge), this is a Sisyphean task. Philosophers don't (and may never) agree about which of these properties is necessary. We need a different...