whowhowho comments on State your physical account of experienced color - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (53)
Inability to imagine it. We know how virtual geometrical structures -- shapes -- can be built up in other structures, because we can build things that do that: they're called GPUs, shaders, graphics subroutines and so on. If you can engineer something, you understand it. There is a sense in which a computer has its own internal representation of a geometry other than its own physical geometry. We don't, however, know how to give a computer its own red. It just stores a number which activates an LED, which activates our own red. We don't know how to write seeRed().
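To make the "it just stores a number" point concrete, here's a minimal sketch (the name see_red and the value 0xFF0000 are my own illustrative choices, not anything anyone in the thread proposed). The only seeRed() we know how to write reduces to comparing one stored number against another:

```python
RED = 0xFF0000  # "red", as far as the machine is concerned: a bare integer

def see_red(pixel: int) -> bool:
    """Compare a stored number against another number -- a match, not an experience."""
    return pixel == RED

print(see_red(0xFF0000))  # True
print(see_red(0x0000FF))  # False
```

Nothing in this function experiences anything; it is an integer comparison wearing a suggestive name.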
You lost me a little bit. We can write "see these wavelengths in this shape and make them black" (red-eye filters). What makes "seeing" shape different from "seeing" color?
We can give a computer an internal representation of shape, but not of colour as we experience it.
How would it function differently if it did have "an internal representation of color as we experience it"?
That's hard to answer without specifying more about the nature of the AI, but it might say things like "what a beautiful sunset".
I'm not going to say the goalposts are moving, but I definitely don't know where they are any more. I was talking about red-eye filters built into cameras. You seemed to be suggesting that they do have "internal representations" of shape, but not of color, even though they recognize both shape and color in the same way. I'm trying to see what the difference is.
Essentially, why can a computer have an internal representation of shape without saying "wow, what a beautiful building" but an internal representation of color would lead it to say "wow, what a beautiful sunset"?
I don't know why you are talking about filters.
If you think you can write seeRed(), please supply some pseudocode.
What was wrong with this comment?
It doesn't relate to giving a computer an internal representation of colour like ours. If you put the filter on, you don't go from red to black; you go from #FF0000 to #000000, or something.
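As a sketch of what the filter actually does under that description (function name and the exact hex values are illustrative, not a real camera's implementation), the whole operation is a transformation between hex strings:

```python
def red_eye_filter(pixel: str) -> str:
    """Swap pure red for black: a hex-string-to-hex-string mapping, nothing more."""
    return "#000000" if pixel == "#FF0000" else pixel

print(red_eye_filter("#FF0000"))  # "#000000"
print(red_eye_filter("#00FF00"))  # "#00FF00", unchanged
```

The filter never handles "red" or "black" as we experience them; it maps one numeric encoding to another.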
Okay, so... we can't make computers that go from red to black, and we can't ourselves understand what it's like to go from #FF0000 to #000000, and this means what?
To me it means the things we use to do processing are very different. Say, a whole brain emulation would have our experience of color, and if we get really really good at cognitive surgery, we might be able to extract the minimum necessary bits to contain that experience of color, and bolt it onto a red-eye filter. Why bother, though? What's the relevant difference?