Okay, this is much better and different from what I thought you were saying.
When you say "we" and "minds" you are getting at something and here is my attempt to see if I've understood:
Given an algorithm which models itself (something like a mind, but not so specific; tabooing "mind") and its environment, that algorithm must recognize the difference between its model of its environment, which is filtered through its I/O devices of whatever form, and the environment itself.
The algorithm should recognize that the information contained in the environment may be in a different format from the information contained in its model (dualism of a sort), and that the model's accuracy is optimized for prediction rather than truth.
Is this similar to what you mean?
Given an algorithm which models itself ... Is this similar to what you mean?
No. If it involves self-modeling, it is very far from what I am talking about. Give it up. It is just not worth it.
An article at The Edge has scientific experts in various fields give their favorite examples of theories that were wrong in their fields. Most relevantly to Less Wrong, many of those scientists discuss what their disciplines did wrong to produce the misconceptions. For example, Irene Pepperberg, not surprisingly, discusses the failure of scientists to appreciate avian intelligence. She emphasizes that this failure resulted from a combination of factors, including the lack of appreciation that high-level cognition could occur without the mammalian cortex, and the fact that many early studies used pigeons, which just aren't that bright.