The Open Thread posted at the beginning of the month has gotten really, really big, so I've gone ahead and made another one. Post your new discussions here!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
"So you shouldn't take it too badly that your comment didn't catch fire."
I'm not mad, but... Just see it from my point of view. An interesting thought doesn't come to guys like me every day. ;)
"But the quail doesn't seem to be anywhere in the code, so where does it come from?"
I think it's in the code. When I try to imagine a mind that has no qualia, I imagine something quite unlike myself.
What would it actually be like for us to not have qualia? It could mean that I would look at a red object and think, "object, rectangular, apparent area 1 degree by 0.5 degrees, long side vertical, top left at (100, 78), color 0xff0000". That would be the case where the algorithm has no inside, so it doesn't need to feel like anything from the inside. Nothing about our thoughts would be "ineffable". (Although it would be insulting to call such a being unconscious or, worse, "not self-aware" for knowing itself better than we do... Hmm. I guess qualia and consciousness are separate after all. Or are they? But I'm dealing with qualia right now.)
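To make the thought experiment concrete, here is a toy sketch (my own framing, not anything from the post) of what such an "outside-only" percept might look like: a plain record whose every fact is explicit, with nothing left over to be ineffable. All names and fields are hypothetical.

```python
# A hypothetical percept for a mind with no inside: all structure, no residue.
from dataclasses import dataclass

@dataclass
class Percept:
    kind: str           # e.g. "object"
    shape: str          # e.g. "rectangular"
    width_deg: float    # apparent angular width
    height_deg: float   # apparent angular height
    top_left: tuple     # position in visual coordinates
    color: int          # raw RGB value, e.g. 0xff0000

red_thing = Percept("object", "rectangular", 1.0, 0.5, (100, 78), 0xff0000)
# Every fact about the percept is effable: the percept just *is* this record.
print(red_thing.color == 0xff0000)  # True
```

The point of the sketch is that nothing here needs to "feel like" anything: a full description of the record is a full description of the experience.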
Or, the nerve could send its impulse directly into a muscle, like in jellyfish. That would mean that the hole in my knowledge is so big that the quail for "touch" falls through it.
In my mind, touch leaves a memory, and I then try to look at this memory. I ask my brain, "what does touch feel like?", and I get back, "Error: can't decompile native method. But I can tell you definitely what it doesn't feel like: greenness." So what I'm saying is, I can't observe what the feeling of touch is made of, but it carries enough bits that I can't confuse it with green.
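That "can't decompile, but can still distinguish" idea can be caricatured in code. A minimal sketch, entirely my own construction: feelings as opaque handles that support comparison but refuse introspection.

```python
# Toy model: a quale as an opaque handle. You can compare two of them,
# but asking what one is made of raises the brain's "error message".
class Quale:
    def __init__(self, token):
        self._token = token  # hidden internal representation

    def __eq__(self, other):
        # Comparison works: the handle carries enough bits to distinguish.
        return isinstance(other, Quale) and self._token == other._token

    def decompile(self):
        # Introspection fails: the representation is not inspectable.
        raise NotImplementedError("can't decompile native method")

touch = Quale("touch")
green = Quale("green")
print(touch == green)  # False: touch is definitely not greenness
```

The design choice mirrors the metaphor: equality is exposed, internals are not, so the system can know *that* two feelings differ without knowing *what* they are made of.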
It makes me [feel] unconfused. Although it might be confusing.
"Just to be clear, I think that those questions arise out of a wrong approach to consciousness."
What's your approach?
I don't understand your explanation. You are apparently saying that a quale (you seem to deliberately misspell the word, why?) is how the algorithm feels from the inside. Well, I agree, but at the same time I think that "quale" is only a philosopher's noble word for "feel from the inside". The explanation looks like a truism.
I have always been (and still am) confused by questions like: How do other people perceive colors? Do they perceive them the same way I do? Are there people who see the colors inverted, having the equivalent of my feeling of "...