shminux comments on Open Thread for February 11 - 17 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
All this talk of P-zombies. Is there even a hint of a mechanism that anybody can think of to detect if something else is conscious, or to measure their degree of consciousness assuming it admits of degree?
I have spent my life figuring other humans are probably conscious purely on an Occam's razor kind of argument: I am conscious, and the most straightforward explanation for my similarities and grouping with all these other people is that they are in the relevant respects just like me. But I have always thought that increasingly complex simulations of humans could be "obviously" not conscious and yet be mistaken for conscious by others. Is every human on the planet who reaches "voice mail jail," or some other interactive voice system, aware that they have not reached a consciousness? Do even those of us who are aware forget sometimes when we are not being careful? Is this distinction going to become even harder to make as the technology continues to improve?
I have been enjoying the television show "Almost Human." In this show there are androids, most of which have been designed NOT to be too much like humans, although what they are really like is boring, rule-following humans. It is clear in this show that the value of an android "life" is a tiny fraction of the value of a "human" life: in the first episode, a human cop kills his android partner in order to get another one. The partner he does get is much more human-like, but still considered the property of the police department for which he works, and nobody really has much of a problem with this. Ironically, this "almost human" android partner is African American.
I don't know of a human-independent definition of consciousness, do you? If not, how can one say that "something else is conscious"? So the statement
will only make sense once there is a definition of consciousness not relying on being a human or using one to evaluate it. (I have a couple ideas about that, but they are not firm enough to explicate here.)
I don't know of ANY definition of consciousness which is testable, human-independent or not.
Integrated Information Theory is one attempt at a definition. I read about it a little, but not enough to determine if it is completely crazy.
IIT provides a mathematical approach to measuring consciousness. It is not crazy, and there are a significant number of good papers on the topic. It is also human-independent.
I don't understand it, but from reading the Wikipedia summary it seems to me that it measures the complexity of the system. Complexity is not necessarily consciousness.
According to this theory, what is the key difference between a human brain, and... let's say a hard disk of the same capacity, connected to a high-resolution camera? Let's assume that the data from the camera are being written in real time to pseudo-random parts of the hard disk. The pseudo-random parts are chosen by calculating a checksum of the whole hard disk. This system obviously is not conscious, but seems complex enough.
IIT proposes that consciousness is integrated information.
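To make "integrated information" slightly more concrete: the rough intuition is that a system is integrated to the extent that its joint behavior carries information that the parts, taken separately, do not. This is NOT the actual IIT measure (Φ involves partitioning the system and perturbing it, and is much more involved); the following is just a hypothetical toy sketch, using plain mutual information between two binary nodes as a stand-in, with made-up distributions:

```python
# Toy illustration only, not the real IIT formalism. Mutual information
# between two halves of a system is used here as a crude stand-in for
# "integration": it is zero when the parts are statistically independent.
import math
from itertools import product

def mutual_information(joint):
    """Mutual information (in bits) of a joint distribution over two binary variables."""
    px = [sum(joint[(x, y)] for y in (0, 1)) for x in (0, 1)]  # marginal of node 1
    py = [sum(joint[(x, y)] for x in (0, 1)) for y in (0, 1)]  # marginal of node 2
    mi = 0.0
    for x, y in product((0, 1), repeat=2):
        p = joint[(x, y)]
        if p > 0:
            mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# "Coupled" system: the two nodes are always in the same state.
coupled = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}
# "Independent" system: each node is a fair coin on its own.
independent = {(x, y): 0.25 for x, y in product((0, 1), repeat=2)}

print(mutual_information(coupled))      # 1.0 bit: the parts share information
print(mutual_information(independent))  # 0.0 bits: no integration at all
```

On this crude proxy, the camera-plus-hard-disk example above would score high on information (many distinguishable states) but the interesting question, which the real Φ tries to answer, is how much of that information exists only in the system as a whole rather than in its pieces.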
The key difference between a brain and the hard disk is that the disk has no way of knowing what it is actually sensing. A brain can tell the difference between many more senses, and can receive and use more forms of information. The camera is not conscious of the fact that it is sensing light and colour.
This article is a good introduction to the topic, and the photodiode example in the paper is a simple version of your question: http://www.biolbull.org/content/215/3/216.full
Thanks! The article was good. At this moment, I am... not convinced, but also not able to find an obvious error.