TheAncientGeek comments on Confused as to usefulness of 'consciousness' as a concept - LessWrong
I am suspicious of this normative sense of 'consciousness'. I think it is basically a mistake of false reduction to suppose that moral worth is monotonically increasing in descriptive-sense-of-the-word-consciousness. That monotonicity seems to be a premise on which this normative sense of the word 'consciousness' is based. In fact, even the metapremise that 'moral worth' is a thing looks like a fake reduction. At a high level, the idea of consciousness as a measure of moral worth looks very strongly like a fake utility function.
A specific example: A superintelligent (super?)conscious paperclip maximizer is five light-minutes away from Earth. Omega has given you a button that you can press which will instantly destroy the paperclip maximizer. If you do not press it within five minutes, then the paperclip maximizer shall paperclip Earth.
I would destroy the paperclip maximizer without any remorse, just as I would destroy Skynet without remorse. (The Terminator: Salvation Skynet, at least, seems not only to be smart but also to have developed feelings, so it is probably conscious.)
I could go on about why consciousness as moral worth (or even the idea of moral worth in the first place) seems massively confused, but I intend to do that eventually in a post or Sequence (Why I Am Not An Ethical Vegetarian), so I shall hold off for now on the assumption that you get my general point.
I don't follow your example.
Are you taking the Clippie to be conscious?
Are you taking the Clippie's consciousness to imply a deontological rule not to destroy it?
Are you taking the Clippie's level of consciousness to be so huge that it implies a utilitarian weighting in its favour?
The comment to which you're replying can be seen as providing a counterexample to the principle that goodness or utility is monotonically increasing in consciousness or conscious beings. It is also a refutation of, as you mention, any deontological rule that might forbid destroying it.
The counterexample I'm proposing is that one should destroy a paperclip maximiser, even if it's conscious, even though doing so will reduce the sum total of consciousness; goodness is outright increased by destroying it. (This holds even if we don't suppose that the paperclipper is more conscious than a human; we need only for it to be at all conscious.)
(I suspect that some people who worry about utility monsters might just claim they really would lie down and die. Such a response feels circular to me, but I can't immediately pin down rigorously why it would be.)
I am asking HOW it is a counterexample. As far as I can see, you would have to make an assumption about how consciousness relates to morality specifically, as in my second and third questions.
For instance, suppose "conscious beings are morally relevant" just means "don't kill conscious beings without good reason."