Comment in a previous thread on a similar topic
I heard on the television that some blind kid had developed the skill of seeing with clicks. It sounded cool and worth the effort, so I trained myself to have that ability too. Yes, it was fun as predicted, but not totally mindblowing. No, it is not impossible.
Small babies have eyes that receive light, but they can't see because they can't process the information sufficiently. For hearing, people retain this underdeveloped processing well past being 3 days old. There is no natural incentive to be particularly picky about hearing: you can be a human just fine with only stereo hearing rather than echolocation, and not even know you are missing anything. (Humans are seers, not sniffers like dogs or hearers like bats. Humans are also trichromats while many animals are tetrachromats, and yet a human doesn't feel like they're missing out on anything, because you don't know you are the handicapped minority.)
The argument is like saying that because people are naturally illiterate, they can't possibly imagine what it would be to see words instead of hearing them. If you don't go outside the experience of a medieval peasant, that might hold. If you are given a text in a foreign alphabet, and later given the alphabet and asked to point out the letters you saw, you might not be able to complete the task. But having this limitation doesn't mean it can't be changed with training: your ability to see letters will improve as you work on your literacy. People are able to work toward more efficient sensory processing. This includes high-end stuff such as the synesthetic ability to use spatial metaphors for amounts; there are people whose process of identifying a letter or number gives it a color association. It's not that the information is in the wrong format for the brain to accept it. It's that the brain is not yet in a sufficiently expressive format to represent the stimuli. The duty to change lies with the brain, rather than with any incomprehensibility of the object. Map and territory, etc.
People who argue that imagination can't encompass this have not seriously tried. And even if they have seriously tried, that is more evidence of their lower-than-average imaginative capability than of the truth of their argument. "What it is to be a human" isn't even nearly standard enough to be referenced as a single monolithic concept, much less as an axiom that doesn't need to be stated.
So claiming logical impossibility is hasty in the extreme.
In response to the classic Mysterious Answers to Mysterious Questions, I express some skepticism that consciousness can be understood by science. I postulate (with low confidence) that consciousness is "inherently mysterious", in that it is philosophically and scientifically impenetrable. The mysteriousness is a fact about our state of mind, but that state of mind is due to a fundamental epistemic feature of consciousness and is impossible to resolve.
My issue with understanding the cause of consciousness involves p-zombies. Any experiment with the goal of understanding consciousness would have to be able to detect consciousness, which seems to me to be philosophically impossible. To be more specific, any scientific investigation of the cause of consciousness would have (to simplify) an independent variable that we could manipulate to see how this manipulation affects the dependent variable, the presence or absence of consciousness. We assume that those around us are conscious, and we have good reason to do so, but we can't rely on that assumption in any experiment in which we are investigating consciousness. Before we ask “what is causing x?”, we first have to know that x is present.
As Eliezer points out, an individual saying he's conscious is a pretty good signal of consciousness, but we can't necessarily rely on that signal for non-human minds. A conscious AI may never talk about its internal states, depending on its structure. (Humans gained a survival advantage from the social sharing of internal realities; an AI will not be subject to that selection pressure. There's no reason for it to have any sort of emotional need to share its feelings, for example.) On the flip side, a savvy but non-conscious AI may talk about its "internal states", not because it actually has internal states, but because it is "guessing the teacher's password" in the strongest way imaginable: it has no understanding whatsoever of what those states are, but computes that aping internal states will accomplish its goals. I don't know how we could possibly tell whether the AI is aping consciousness for its own ends or is actually conscious. If consciousness is thus undetectable, I can't see how science can investigate it.
That said, I am very well aware that "Throughout history, every mystery ever solved has turned out to be not magic"* and that every single time something has seemed inscrutable to science, a reductionist explanation has eventually surfaced. Knowing this, I seriously downgrade my confidence that "No, really, this time it is different. This phenomenon really is beyond the grasp of science." I look forward to someone coming forward with something clever that dissolves the question, but even so, it does seem inscrutable.
*- Though, to be fair, this involves a selection bias. Of course all the solved mysteries weren't magic; all the mysteries that actually are magic remain unsolved, because they're magic! This is NOT to say I believe in magic, just to say that it's hardly saying much to claim that all the things we've come to understand were in principle understandable. To steelman: I do understand that with each mystery that was once declared magical and later shown not to be, our collective priors for the existence of magical things decrease. (There is a sort of halting problem here: if a question has remained unsolved since the dawn of asking questions, is that because it is unsolvable, or because we're right around the corner from solving it?)