ShardPhoenix comments on The AI in Mary's room - Less Wrong
Consider a situation where Mary is so dexterous that she is able to perform fine-grained brain surgery on herself. She could then examine what the brain of someone who has seen red looks like, and manually copy any relevant differences into her own brain. While she still would never have actually seen red through her eyes, it seems she would then know what it is like to see red as well as anyone else.
I think this demonstrates that the Mary's room thought experiment is about the limitations of human senses/means of learning, and that its apparent sense of mystery comes mainly from the vagueness of what it means to "know all about" something. (Not saying it was a useless idea - it can be quite valuable to be forced to break down some vague or ambiguous idea that we usually take for granted.)
Mary's room is about what it says it's about: the existence of non-physical facts. Finding a loophole where Mary can instantiate the brain state without having the perceptual stimulus doesn't address that...indeed it assumes that an instantiation of the red-seeing is necessary, which is tantamount to conceding that something subjective is going on, which is tantamount to conceding the point.
What is a "non-physical fact"? The experience of red seems to be physically encoded in the brain like anything else. It does seem clear that some knowledge exists which can't be transmitted from human to human via language, at least not in the same way that 2+2=4 can. However, this is just a limitation of the human design that doesn't necessarily apply to e.g. AIs (which, depending on design, may be able to transmit and integrate snippets of their internal code and data), and I don't think this thought experiment proves anything beyond that.
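The contrast between language-mediated learning and direct state transfer can be sketched with a toy example (the `Agent` class and its methods are entirely hypothetical, invented for illustration): one agent acquires an internal encoding through "experience", and a second agent acquires the same encoding by a raw copy of internal state, with no description in between.

```python
class Agent:
    """Toy agent whose 'knowledge' is just stored internal state."""

    def __init__(self):
        # learned associations: concept name -> internal encoding
        self.memory = {}

    def experience(self, concept, encoding):
        """Learn a concept from direct 'sensory' input."""
        self.memory[concept] = encoding

    def transfer(self, other, concept):
        """Copy the raw internal encoding of a concept into another agent,
        bypassing any language-like description entirely."""
        other.memory[concept] = self.memory[concept]

    def knows(self, concept):
        return concept in self.memory


a, b = Agent(), Agent()
a.experience("red", [0.9, 0.1, 0.1])  # stand-in for a perceptual encoding
assert not b.knows("red")             # b has never 'seen' red
a.transfer(b, "red")                  # direct state copy, no description
assert b.memory["red"] == a.memory["red"]
```

For humans the `transfer` channel doesn't exist (short of the brain-surgery scenario above), which is the design limitation being pointed at; for two systems with compatible internal representations, nothing in principle blocks it.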
The argument treats physical knowledge as a subset of objective knowledge. Subjective knowledge, which can only be known on a first-person basis, automatically counts as non-physical. That's an epistemic definition.
If you have the expected intuition about Mary's room, then Mary would be able to read cognitive information from brain scans, but not experiential information. In that sense, 'red' is not encoded in the same way as everything else, since it cannot be decoded in the same way.
But not a superhuman design. The original paper (have you read it?) avoids the issue of limited communication bandwidth by making Mary a super-scientist who can examine brain scans at any level of detail.
What it proves to you depends on what intuitions you have about it. If you think Mary would know what red looks like while in the room, from reading brain scans, then it isn't going to prove anything to you.
A way to rephrase the question is, "is there any sequence of sensory inputs, other than the stimulation of red cones by red light, that will cause Mary to have memories of the color red comparable to those of someone who has had their red cones stimulated at some point?" It's possible that the answer is no, which says something interesting about the API of the human machine, but doesn't seem necessarily fundamental to the concept of knowledge.
The relevance is physicalism.
If physicalism is the claim that everything has a physical explanation, then the inability to understand what pain is without being in pain contradicts it. I don't think anyone here believes that physicalism is an unimportant issue.
I'm arguing that there's no contradiction and that this inability is just a limit of humans/organic brains, not a fundamental fact about pain or information.
If you want to argue to that conclusion, then argue for it: what kind of limit? Where does it come from?