
Ben_Jones comments on That Alien Message - Less Wrong

Post author: Eliezer_Yudkowsky, 22 May 2008 05:55AM (111 points)

Comment author: Ben_Jones, 23 May 2008 01:35:00PM, 10 points

"presumably given a sufficiently advanced cognitive science, we could look at its inner workings and say whether it's conscious."

Can we please stop discussing consciousness as though it's some sort of binary option? As though passing a Turing test somehow imbues a system with some magical quality that changes everything?

An AI won't suddenly go 'ping' and become self-aware, any more than a baby suddenly becomes a self-aware entity on its second birthday. Deciding whether or not boxing an AI is slavery is akin to discussions on animal rights, in that it deals with the slippery, quantitative question of how much moral weight we give to 'consciousness'. It's definitely not a yes/no question, and we shouldn't treat it as such.

Comment author: pnrjulius, 09 April 2012 05:06:26AM, 0 points

I think that's right.

Yet, two things:

  1. It's very hard for me to imagine half a quale. Perhaps this is a failure of imagination?

  2. How do we detect even quantitative levels of consciousness? Surely it's not enough to just have processing power; you must actually be doing the right sort of thing (computations, behaviors, chemical reactions, something). But then... are our computers conscious, even a little bit? If so, does this change our moral relationship to them? If not, how do we know that?