eli_sennesh comments on Why Don't Rationalists Win? - Less Wrong

6 Post author: adamzerner 05 September 2015 12:57AM


Comment author: [deleted] 08 September 2015 12:41:04PM *  1 point [-]

No, it's better to assume that folk-psychology doesn't accurately map the mind. "Reversed stupidity is not intelligence."

Your statement is equivalent to saying, "We've seen a beautiful sunset. Clearly, it must be a sign of God's happiness, since it couldn't be a sign of God's anger." In actual fact, it's all a matter of the atmosphere refracting light from a giant nuclear-fusion reaction, and made-up deities have nothing to do with it.

Just because a map seems to let you classify things, doesn't mean it provides accurate causal explanations.

Comment author: TheAncientGeek 12 September 2015 06:49:35AM *  0 points [-]

If we don't know enough about how the mind works to say it is good at compartmentalisation, we also don't know enough to say it is bad at compartmentalisation.

Your position requires you to be noncommittal about a lot of things. Maybe you are managing that.

The sunset case isn't analogous, because there we have the science as an alternative.

Comment author: Jiro 12 September 2015 04:07:11PM *  0 points [-]

I wouldn't be able to tell if someone is a good mathematician, but I'd know that if they add 2 and 2 the normal way and get 5, they're a bad one. It's often a lot easier to detect incompetence, or at least some kinds of incompetence, than excellence.

Comment author: TheAncientGeek 12 September 2015 05:51:31PM *  0 points [-]

Is compartmentalisation supposed to be a competence or an incompetence, or neither?

Comment author: [deleted] 12 September 2015 08:03:41PM -1 points [-]

Personally, I don't think "compartmentalization" actually cuts reality at the joints. Surely the brain must solve a classification problem at some point, but it could easily "fall out" that your algorithms simply perform better if they sort things or situations into separate contextualized models - that is, if they "compartmentalize" - than if they try to build one humongous super-model for all possible things and situations.
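The "contextualized models vs. one super-model" claim can be made concrete with a toy sketch. This is purely illustrative (none of the names or numbers come from the thread): data whose relationship flips between two contexts is fit badly by a single global model, but perfectly by two small per-context models.

```python
# Toy illustration: two per-context ("compartmentalized") models can beat
# one global model when the underlying relationship differs by context.
# All names and data here are made up for illustration.

def fit_slope(points):
    """Least-squares slope through the origin: y ~ w * x."""
    num = sum(x * y for x, y in points)
    den = sum(x * x for x, _ in points)
    return num / den

def mse(points, w):
    """Mean squared error of the model y = w * x on the given points."""
    return sum((y - w * x) ** 2 for x, y in points) / len(points)

# Context A: y = 2x. Context B: y = -2x.
ctx_a = [(x, 2 * x) for x in range(1, 6)]
ctx_b = [(x, -2 * x) for x in range(1, 6)]

# One "super-model" fit to everything averages the contexts away (slope 0).
w_global = fit_slope(ctx_a + ctx_b)
err_global = mse(ctx_a, w_global) + mse(ctx_b, w_global)

# Two compartmentalized models, one per context, each fit their data exactly.
err_split = mse(ctx_a, fit_slope(ctx_a)) + mse(ctx_b, fit_slope(ctx_b))

print(err_global, err_split)  # the compartmentalized error is far lower
```

The point is not that brains do least-squares regression, only that "use separate models per context" can be the better-performing strategy rather than a flaw.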

Comment author: TheAncientGeek 13 September 2015 01:37:50PM 0 points [-]

But you don't have proof of that theory, do you?

Comment author: [deleted] 14 September 2015 01:48:35PM -1 points [-]

Your original thesis would support that theory, actually.

Comment author: TheAncientGeek 15 September 2015 10:40:33AM 0 points [-]

I haven't made any object-level claims about psychology.