XiXiDu comments on The Cognitive Science of Rationality - Less Wrong

88 Post author: lukeprog 12 September 2011 08:48PM




Comment author: nshepperd 12 September 2011 10:29:58AM 6 points

Ahh, don't say "understanding" when you mean "containing a simulation"!

It's true that a computer capable of storing n bits can't contain within it a complete description of an arbitrary n-bit computer. But that's not fundamentally different from being unable to store a description of the 3^^^3 × n-bit world out there (the territory will generally always be bigger than the map); and of course you don't have to have a miniature bit-for-bit copy of the territory in your head to have a useful understanding of it, and the same goes for self-knowledge.

(Of course, regardless of all that, we have quines anyway, but they've already been mentioned.)
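The quines mentioned above make the point concrete: a program can contain a complete description of itself without infinite regress, by using indirection through a template rather than a literal nested copy. A minimal Python example (the template string `s` here is just one standard way to do it):

```python
# A minimal quine: a program whose output is exactly its own source code.
# The trick is that s describes the program via a format template,
# not by containing a literal copy of itself inside itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two lines above verbatim, so the program's output equals its own source even though no "simulation" of itself is involved.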

Comment author: XiXiDu 12 September 2011 12:16:58PM 0 points

Ahh, don't say "understanding" when you mean "containing a simulation"!

Could you elaborate on the difference between "understanding" and "simulating"? How are you going to get around logical uncertainty?

Comment author: nshepperd 12 September 2011 01:15:44PM 3 points

The naive concept of understanding includes everything we've already learned from cognitive psychology, and other sciences of the brain. Knowing, for example, that the brain runs on neurons with certain activation functions is useful even if you don't know the specific activation states of all the neurons in your brain, as is a high-level algorithmic description of how our thought processes work.

This counts as part of the map that reflects the world "inside our heads", and it is certainly worth refining.

In the context of a computer program or AI, such "understanding" would include the AI inspecting its own hardware and its own source code, whether by reading it from the disk or by esoteric quining tricks. An intelligent AI could make useful inferences from the content of the code itself -- without having to actually run it, which is what would constitute "simulation" and run into all the paradoxes of not having enough memory to contain a running version of itself.
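The distinction can be sketched in code. The toy "agent" below is hypothetical (a source string standing in for code read from disk); the point is that static analysis with Python's `ast` module yields real facts about the program -- which decision procedures exist, how many branches it has -- without ever executing it:

```python
import ast

# Toy stand-in for an agent's own source code, read as text.
# We analyze it statically ("understanding") rather than running it
# ("simulating"), so no memory paradox arises.
agent_source = """
def decide(observation):
    if observation > 0:
        return "approach"
    return "retreat"
"""

tree = ast.parse(agent_source)
functions = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
branches = sum(isinstance(n, ast.If) for n in ast.walk(tree))
print(functions)  # -> ['decide']
print(branches)   # -> 1
```

The knowledge gained this way is partial (we learn the structure of `decide`, not what it returns on every input), which is exactly the trade-off described below.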

"Understanding" is then usually partial, but still very useful. "Simulating" is precise and essentially complete, but usually computationally intractable (and occasionally impossible) so we rarely try to do that. You can't get around logical certainty, but that just means you'll sometimes have to live with incomplete knowledge, and it's not as if we weren't resigned to that anyway.