
TheOtherDave comments on The AI in Mary's room - Less Wrong Discussion

4 Post author: Stuart_Armstrong 24 May 2016 01:19PM


Comments (58)


Comment author: ImNotAsSmartAsIThinK 29 May 2016 07:28:03PM *  0 points [-]

Mary's room seems to be arguing that,

[experiencing(red)] =/= [experiencing(understanding([experiencing(red)]))]

(translation: the experience of seeing red is not the experience of understanding how seeing red works)

This is true, when we take those statements literally. But it's true in the same sense that a Gödel encoding of a statement in PA is not literally that statement. It is just a representation, but the representation is exactly homomorphic to its referent. Mary's representation of reality is presumed complete ex hypothesi, therefore she will understand exactly what will happen in her brain after seeing color, and that is exactly what happens.

You wouldn't call a statement of PA that isn't literally a Gödel encoding of some statement (for a fixed encoding) a non-mathematical statement. For one thing, that statement has a Gödel encoding by necessity. But more importantly, even though the statement technically isn't literally a Gödel encoding, it's still mathematical, regardless.

Mary knows how she will respond to learning what red is like. Mary knows how others will respond. This exhausts the space of possible predictions that could be credited to this subjective knowledge, and all of it can be done without it.

What Mary doesn't know must be subjective, if there is something Mary doesn't know. So the eventual point is that there is more to knowledge than objective knowledge.

This is tangential to the discussion, but I don't think that is a wise way of labeling that knowledge.

Suppose Mary has enough information to predict her own behavior. Suppose she predicts she will do x. Could she not, upon deducing that fact, decide to not do x?

Mary has all objective knowledge, but certain facts about her own future behavior must escape her, because any certain prediction could trivially be negated.

Comment author: TheOtherDave 30 May 2016 03:48:20AM 0 points [-]

Suppose Mary has enough information to predict her own behavior. Suppose she predicts she will do x. Could she not, upon deducing that fact, decide to not do x?

There are three possibilities worth disambiguating here.
1) Mary predicts that she will do X given some assumed set S1 of knowledge, memories, experiences, etc., AND S1 includes Mary's knowledge of this prediction.
2) Mary predicts that she will do X given some assumed set S2 of knowledge, memories, experiences, etc., AND S2 does not include Mary's knowledge of this prediction.
3) Mary predicts that she will do X independent of her knowledge, memories, experiences, etc.