Stuart_Armstrong comments on An attempt to dissolve subjective expectation and personal identity - LessWrong

Post author: Kaj_Sotala 22 February 2013 08:44PM




Comment author: Stuart_Armstrong 13 March 2013 04:59:29PM 0 points

Interesting...

Why would abstract reasoning end up reaching incorrect results more easily? Because it's a recent, underdeveloped evolution, or because of something more fundamental?

Comment author: Kaj_Sotala 15 March 2013 01:42:00PM 1 point

Good question!

At least three possible explanations come to mind:

  1. Abstract reasoning, by its very nature, is capable of reasoning in any domain and using entirely novel concepts. That limits the amount of domain-specific sanity checks that can be "built in".
  2. Reasoning may not have evolved for problem-solving in the first place, but for communication, limiting the extent to which it's been selected for its problem-solving capability. (This would also help explain why it doesn't drive our behavior that strongly.)
  3. Although human reasoning obviously doesn't run on predicate logic or any other formal logic, some aspects of it seem to be similar. We know that formal logics are brittle in the sense that in order to get the correct conclusions, you need to get every single premise and inference step correct. People can easily carry out reasoning which is formally entirely correct, but still inapplicable to the real world because they didn't remember to incorporate some relevant factor. (Our working memory limits and our tendency to simplify things into nice stories that fit our memory easily probably both make this worse and are necessary for us to be able to use abstract reasoning in the first place.) Connectionist-style reasoning seems better able to model things without needing to explicitly specify every single thing that influences them, which is (IIRC) a big part of why connectionists have criticized logic-based AI systems as hopelessly fragile and incapable of reliable real-world performance.
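The brittleness described in point 3 can be illustrated with a toy example. This is a minimal sketch of forward-chaining inference (all predicates and facts here are hypothetical, chosen just for illustration): every inference step is formally valid, yet the conclusion is wrong about the world because one relevant factor was never encoded as a premise.

```python
def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts,
    adding its conclusion, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge base: "birds fly" as an exceptionless rule.
rules = [
    ({"bird(tweety)"}, "flies(tweety)"),
]

derived = forward_chain({"bird(tweety)"}, rules)
print(derived)
# The derivation is formally correct, but if Tweety is a penguin the
# conclusion is false -- the relevant factor "penguin(tweety)" was simply
# never incorporated. Repairing this means rewriting the rule itself
# (e.g. requiring "not penguin(tweety)" as an extra premise), and so on
# for every further exception: the fragility the comment describes.
```

A connectionist-style model, by contrast, can degrade gracefully when a factor is missing rather than confidently deriving a wrong conclusion, which is the contrast the comment draws.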