
pedanterrific comments on State your physical account of experienced color - Less Wrong Discussion

-1 Post author: Mitchell_Porter 01 February 2012 07:00AM




Comment author: pedanterrific 01 February 2013 06:02:18PM *  0 points

I'm not going to say the goalposts are moving, but I definitely don't know where they are any more. I was talking about red-eye filters built into cameras. You seemed to be suggesting that they do have "internal representations" of shape, but not of color, even though they recognize both shape and color in the same way. I'm trying to see what the difference is.

Essentially, why can a computer have an internal representation of shape without saying "wow, what a beautiful building" but an internal representation of color would lead it to say "wow, what a beautiful sunset"?

Comment author: whowhowho 01 February 2013 06:06:30PM *  0 points

I don't know why you are talking about filters.

If you think you can write seeRed(), please supply some pseudocode.

Comment author: pedanterrific 01 February 2013 06:28:22PM 0 points

What was wrong with this comment?

Comment author: whowhowho 01 February 2013 06:40:11PM *  0 points

It doesn't relate to giving a system an internal representation of colour like ours. If you put the filter on, you don't go from red to black, you go from #FF0000 to #000000, or something.
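A minimal sketch of the kind of numeric manipulation being described here: a hypothetical red-eye filter that maps pixels in a "red" range to black, i.e. #FF0000 to #000000. The function names and the detection threshold are illustrative assumptions, not any real camera's algorithm:

```python
def is_red_eye(pixel):
    # A pixel is an (r, g, b) triple; "red eye" is crudely approximated
    # here as red strongly dominating the other two channels.
    r, g, b = pixel
    return r > 150 and r > 2 * g and r > 2 * b

def apply_red_eye_filter(pixels):
    # Replace detected red pixels with black: #FF0000 -> #000000.
    # The computer only maps one triple of numbers to another triple;
    # nothing in this process resembles an experience of red.
    return [(0, 0, 0) if is_red_eye(p) else p for p in pixels]

print(apply_red_eye_filter([(255, 0, 0), (10, 200, 30)]))
# -> [(0, 0, 0), (10, 200, 30)]
```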

Comment author: pedanterrific 01 February 2013 07:42:29PM 0 points

Okay, so... we can't make computers that go from red to black, and we can't ourselves understand what it's like to go from #FF0000 to #000000, and this means what?

To me it means the things we use to do processing are very different. Say, a whole brain emulation would have our experience of color, and if we get really really good at cognitive surgery, we might be able to extract the minimum necessary bits to contain that experience of color, and bolt it onto a red-eye filter. Why bother, though? What's the relevant difference?

Comment author: whowhowho 02 February 2013 01:30:40AM 0 points

I don't see how a wodge of bits, in isolation from context, could be said to "contain" any processing, let alone anything depending on actual physics. It's hard to see how it could even contain any definite meaning, absent context. What does 100110001011101 mean?
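The point that a bit string carries no definite meaning without an interpretive context can be made concrete. Below, the same 15 bits from the comment are read under two arbitrary conventions (both chosen here purely for illustration) and yield entirely different "contents":

```python
bits = "100110001011101"  # the bit string from the comment above

# Context 1: read the whole string as an unsigned binary integer.
as_int = int(bits, 2)

# Context 2: split into 5-bit groups and read each as a letter
# (a made-up encoding: value mod 26, offset from 'a').
groups = [bits[i:i + 5] for i in range(0, len(bits), 5)]
as_letters = "".join(chr(ord("a") + int(g, 2) % 26) for g in groups)

print(as_int, as_letters)
# -> 19549 tcd
```

Neither reading is "the" meaning of the bits; each is imposed by the surrounding convention, which is the commenter's point.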

Comment author: pedanterrific 02 February 2013 03:31:58AM 0 points

Sorry- "minimum necessary (pieces of brain)", I meant to say. Like, probably not motor control, or language, or maybe memory.

Comment author: whowhowho 03 February 2013 08:38:41PM 1 point

Say, a whole brain emulation would have our experience of color, and if we get really really good at cognitive surgery, we might be able to extract the minimum necessary bits to contain that experience of color, and bolt it onto a red-eye filter. Why bother, though? What's the relevant difference?

The point of discussing the engineering of colour qualia is that it relates to the level of understanding of how consciousness works. Emulations bypass the need to understand something in order to duplicate it, and so are not relevant to the initial claim that the implementation of (colour) qualia is not understood within current science.