V_V comments on The AI in Mary's room - Less Wrong Discussion

4 Post author: Stuart_Armstrong 24 May 2016 01:19PM

Comment author: V_V 28 May 2016 07:02:46PM 0 points [-]

Consider a situation where Mary is so dexterous that she can perform fine-grained brain surgery on herself. She could then examine an example of a brain that has seen red and manually copy any relevant differences into her own brain. While she still would never have actually seen red through her eyes, it seems she would know what it is like to see red as well as anyone else.

But in order to create a realistic experience, she would have to implant a false memory of having seen red, which is something an agent (human or AI) that values epistemic rationality would not want to do.

Comment author: ShardPhoenix 29 May 2016 01:33:55AM 0 points [-]

Since you'd know it was a false memory, it doesn't necessarily seem to be a problem, at least if you really need to know what red is like for some reason.

Comment author: V_V 31 May 2016 05:02:55PM *  0 points [-]

If you know that it is a false memory, then the experience is not completely accurate, though it may still be more accurate than what human imagination could produce.