If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
What literature is available on who will be given moral consideration in a superintelligence's coherent extrapolated volition (CEV), and how much weight each agent will be given?
Nick Bostrom's Superintelligence mentions that it is an open problem whether AIs, non-human animals, currently deceased people, etc. should be given moral consideration, and whether the values of those who aid in creating the superintelligence should be given more weight than those of others. However, Bostrom does not actually answer these questions, other than tentatively advocating that everyone be given equal weight in the CEV. The abstracts of other papers on CEV don't mention this topic, so I am doubtful that reading them in full would be useful.