I do not think this follows; the "consensus" is that sentience is sufficient for moral status. It is not clearly the case that giving some moral consideration to non-human sentient beings would lead to the scenario you describe. Though see: https://www.tandfonline.com/doi/full/10.1080/21550085.2023.2200724
These are great points, thank you!
Remember that what SCEV does is not simply what the individuals included in it directly want, but what they would want after an extrapolation/reflection process that converges in the most coherent way possible. This means that almost certainly, the result is not the same as if there were no extrapolation process. If there were no extrapolation process, one real possibility is that something like what you suggest, such as sentient dust mites or ants taking over the utility function, would indeed occur. But ...
What I mean by "moral philosophy literature" is the contemporary moral philosophy literature; I should have been more specific, my bad. And in contemporary philosophy, it is universally accepted (though of course, there might exist one philosopher or another who disagrees) that sentience, in the sense understood above as the capacity for having positively or negatively valenced phenomenally conscious experiences, is sufficient for moral patienthood. If this is the case, then it is enough to cite a published work or works in which this is evident. This is why I...
Thank you! I will for sure read these when I have time. And thank you for your comments!
Regarding how to take into account the interests of insects and other animals/digital minds, see this passage I had to exclude from publication: [SCEV would apply an equal consideration of interests principle] "However, this does not entail that, for instance, if there is a non-negligible chance that dust mites or future large language models are sentient, the strength of their interests should be weighted the same as the strength of the interests of entities that we have good reasons to believe that it is very likely that they are sentient. The degree of ...
I am arguing that given that
1. (non-human animals deserve moral consideration, and s-risks are bad (I assume this))
We have reasons to believe 2: (we have some pro tanto reasons to include them in the value-learning process of an artificial superintelligence instead of only including humans).
There are people (whose objections I address in the paper) who accept 1 but do not accept 2. 1 is not justified for the same reasons as 2. 2 is justified by the reasons I present in the paper. 1 is justified by other arguments about animal ethics and the...
Hi Roger, first, the paper is addressed to those who already do believe that all sentient beings deserve moral consideration and that their suffering is morally undesirable. I do not argue for these points in the paper, since they are already universally accepted in the moral philosophy literature.
This is why, for instance, I write the following: "sentience in the sense understood above as the capacity of having positively or negatively valenced phenomenally conscious experiences is widely regarded and accepted as a sufficient condition for moral patienthood...
Yes, and other points may also be relevant:
(1) It is not clear whether there are possible scenarios like these in which the ASI cannot find a way to adequately satisfy all the extrapolated volitions of the included beings. There might not be any such scenarios.
(2) If these scenarios are possible, it is also not clear how likely they are.
(3) There is a subset of s-risks and undesirable outcomes (those coming from cooperation failures between powerful agents) that are a problem to all ambitious value-alignment proposals, including CEV and SCEV.
(4) In part, bec...
unlike for other humans, we don't have an instrumental reason to include them in the programmed value calculation, and to precommit to doing so, etc. For animals, it's more of a terminal goal.
First, it seems plausible that we in fact do not have instrumental reasons to include all humans, as I argue in Section 4.2. There are some humans, such as "children, existing people who've never heard about AI or people with severe physical or cognitive disabilities unable to act on and express their own views on the topic," who, if included, would also only ...
Okay, I understand better now.
You ask: "Where does your belief regarding the badness of s-risks come from?"
And you offer three possible answers that I am (in your view) able to choose between:
However, the first two answers do not seem to be answers to the question. My beliefs about what is or is not morally desirable do not come from "what most people value" or "what I personally value but o...
It is not clear to me exactly what "belief regarding suffering" you are talking about, or what you mean by "ordinary human values"/"your own personal unique values".
As I argue in Section 2.2, there is (at least) a non-negligible chance that s-risks occur as a result of implementing human-CEV, even though s-risks are very morally undesirable (in either a realist or a non-realist sense).
Please read the paper, and if you have any specific points of disagreement, cite the passages you would like to discuss. Thank you!
Hi simon,
It is not clear to me exactly which points of the paper you object to, and I feel some of your worries may already be addressed in the paper.
For instance, you write: "And that's relevant because they are actually existing entities we are working together with on this one planet." First, some sentient non-humans already exist, namely non-human animals. Second, the fact that we can or cannot work with given entities does not seem to be what is relevant in determining whether they should be included in the extrapolation b...
I am glad to hear you enjoyed the paper and that our conversation has inspired you to work more on this issue! As I mentioned I now find the worries you lay out in the first paragraph significantly more pressing, thank you for pointing them out!