SilentCal comments on Confused as to usefulness of 'consciousness' as a concept - LessWrong

Post author: KnaveOfAllTrades 13 July 2014 11:01AM


Comment author: KnaveOfAllTrades 15 July 2014 12:11:51PM 6 points

I am suspicious of this normative sense of 'consciousness'. I think it is basically a false reduction to suppose that moral worth is monotonically increasing in consciousness in the descriptive sense of the word. This monotonicity seems to be a premise upon which the normative sense of the word 'consciousness' is based. In fact, even the meta-premise that 'moral worth' is a coherent thing seems like a fake reduction. At a high level, the idea of consciousness as a measure of moral worth looks very strongly like a fake utility function.

A specific example: a superintelligent, (super?)conscious paperclip maximizer is five light-minutes from Earth. Omega has given you a button which, if pressed, will instantly destroy the paperclip maximizer. If you do not press it within five minutes, then the paperclip maximizer shall paperclip Earth.

I would destroy the paperclip maximizer without any remorse, just as I would destroy Skynet without remorse. (The Skynet of Terminator: Salvation, at least, seems not only to be smart but also to have developed feelings, so it is probably conscious.)

I could go on about why consciousness as moral worth (or even the idea of moral worth in the first place) seems massively confused, but I intend to do that eventually in a post or Sequence (Why I Am Not An Ethical Vegetarian), so I shall hold off for now on the assumption that you get my general point.

Comment author: SilentCal 15 July 2014 09:09:24PM 2 points

I think I get what you're saying, but I'm not sure I agree. If the paperclip maximizer worked by simulating trillions of human-like agents doing fulfilling intellectual tasks, I'd be very sad to press the button. If I were convinced that pressing the button would result in less agent-eudaimonia-time over the universe's course, I wouldn't press it at all.

...so I'm probably a pretty ideal target audience for your post/sequence. Looking forward to it!

Comment author: KnaveOfAllTrades 19 July 2014 01:07:26AM 2 points

This is nuking the hypothetical. For any action that someone claims to be a good idea, one can specify a world where taking that action causes some terrible outcome.

> If the paperclip maximizer worked by simulating trillions of human-like agents doing fulfilling intellectual tasks, I'd be very sad to press the button.

If you would be sad because, and only because, it was simulating humans (rather than because the paperclipper was conscious), then my point goes through.

> Looking forward to it!

Ta!