djm comments on The AI in Mary's room - Less Wrong

Post author: Stuart_Armstrong 24 May 2016 01:19PM


Comments (58)


Comment author: djm 25 May 2016 03:28:30AM 0 points

Interesting thought experiment. Do we know that an AI would enter a different mental state, though?

I am finding it difficult to imagine the difference between software "knowing all about" red and "seeing red".

Comment author: Stuart_Armstrong 25 May 2016 12:12:09PM 1 point

Do we know that an AI would enter a different mental state, though?

We could program it that way.

Comment author: ImNotAsSmartAsIThinK 28 May 2016 03:51:17PM 0 points

Arguably it could simulate itself seeing red and replace itself with the simulation.

I think the distinction between 'knowing all about' red and 'seeing' red is captured in my box analogy. The brain state is a box, and inside it there is another box; call this inner box 'understanding'. We call whatever is inside the first box 'experienced'. So the paradox here is that the two distinct states [experiencing (red) ] and [experiencing ( [understanding (red) ] ) ] are both brought under the header [knowing (red)], and this is really confusing.
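The nesting distinction above can be sketched in code. This is only a hypothetical illustration of the box analogy: the `Experiencing` and `Understanding` wrapper names are invented here, not anything from the original post.

```python
# Hypothetical sketch of the box analogy: mental states as nested wrappers.
from dataclasses import dataclass


@dataclass(frozen=True)
class Understanding:
    """The inner box: propositional knowledge *about* some content."""
    content: object


@dataclass(frozen=True)
class Experiencing:
    """The outer box: whatever sits inside the brain-state box is 'experienced'."""
    content: object


# Two distinct states that the single word "knowing" conflates:
seeing_red = Experiencing("red")                        # [experiencing (red)]
knowing_about_red = Experiencing(Understanding("red"))  # [experiencing ([understanding (red)])]

# The states are structurally different, even though both get called "knowing red".
print(seeing_red == knowing_about_red)  # False
```

The point of the sketch is just that the two values differ in structure: one wraps the content directly, the other wraps a description of it.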