Eliezer_Yudkowsky comments on Ethics Notes - Less Wrong

Post author: Eliezer_Yudkowsky 21 October 2008 09:57PM


Comment author: Eliezer_Yudkowsky 22 October 2008 11:55:52PM 0 points

Pearson, interpreting a certain 1 as meaning "I remember the blue light coming on" only makes sense if the bit is set to 1 iff a blue light is seen to come on. In essence, what you're describing could be viewed as an encoding fault as much as a false memory - like leaving a disk's contents unchanged but changing the file format used to interpret them.

For a self-modifying AI, at least, futzing with memories in this way would destroy reflective consistency - the memory would be interpreted by the decision system in a way that didn't reflect the causal chain leading up to it; the read format wouldn't equal the write format. Not a trivial issue.
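The read-format/write-format mismatch above can be made concrete with a toy sketch (my illustration, not from the comment: the encodings and function names are hypothetical). The memory bit is written under one convention and read under another, so the interpretation no longer tracks the causal event that produced it:

```python
# Hypothetical illustration: a memory bit written under one encoding
# but read under a different one - an "encoding fault" rather than a
# false memory. The stored bit is unchanged; its interpretation isn't.

WRITE_ENCODING = {"blue_light_seen": 1, "no_blue_light": 0}
READ_ENCODING = {1: "no_blue_light", 0: "blue_light_seen"}  # flipped

def record(event: str) -> int:
    """Write format: store 1 iff the blue light was seen."""
    return WRITE_ENCODING[event]

def recall(bit: int) -> str:
    """Read format: here it silently disagrees with the write format."""
    return READ_ENCODING[bit]

stored = record("blue_light_seen")  # bit is set to 1
memory = recall(stored)             # decision system reads it as "no_blue_light"
# The recalled memory no longer reflects the causal chain that wrote it:
assert memory == "no_blue_light"
```

The bit itself is never falsified; only the correspondence between writer and reader breaks, which is why a reflectively consistent system must keep the two formats equal.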

It also doesn't necessarily follow that if you can have a true certain memory or a well-calibrated probabilistic memory without harm, you can have a false certain memory in the same place. Consider the difference between knowing the true value of tomorrow's DJIA, being uncertain about tomorrow's DJIA, and having a false confident belief about tomorrow's DJIA.
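The three cases can be separated with a standard scoring-rule calculation (my illustration; the probabilities are hypothetical). Under log loss, a well-calibrated uncertain belief costs a little, but a false confident belief assigns near-zero probability to the truth and its cost blows up - it is not merely intermediate between knowledge and ignorance:

```python
# Hypothetical illustration using log loss (a proper scoring rule).
# Suppose the DJIA in fact goes up tomorrow; score each belief by the
# probability it assigned to that true outcome.
import math

def log_loss(p_assigned_to_truth: float) -> float:
    """Penalty for a belief that gave probability p to what actually happened."""
    return -math.log(p_assigned_to_truth)

true_knowledge = log_loss(1.0)      # certainty in the truth: zero loss
calibrated_guess = log_loss(0.6)    # honest uncertainty: small finite loss
false_certainty = log_loss(1e-12)   # confidently wrong: ~0 on the truth, huge loss

assert true_knowledge == 0.0
assert calibrated_guess < false_certainty
```

The asymmetry is the point: degrading true certainty to calibrated uncertainty costs a bounded amount, while replacing it with false certainty is unboundedly bad, so the two substitutions are not interchangeable.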

With that said, it would be a strange and fragile mind that could be destroyed by one false belief within it - but I think the issue is a tad more fraught than you are making it out to be.