Related to: Which Parts Are "Me"?, Making your explicit reasoning trustworthy, The 5-Second Level.

What's damaging about moralizing that we wish to avoid, what useful purpose does moralizing usually serve, and what allows us to avoid the damage while retaining the usefulness? Moralizing engages psychological adaptations that promote conflict (by playing on social status), which are unpleasant to experience and can lead to undesirable consequences in the long run (such as feeling systematically uncomfortable interacting with a person, and so not being able to live or work or be friends with them). It serves the purpose of imprinting your values, which you feel to be right, on the people you interact with. Consequentialist elucidation of reasons for approving or disapproving of a given policy (virtue) is an effective persuasion technique if your values are actually right (for the people you try to confer them on), and it doesn't engage the same parts of your brain that make moralizing undesirable.

What happens here is a transfer of responsibility for important tasks from the imperfect machinery that historically managed them (machinery whose systematic problems, in any given context, humans but not evolution can notice) to explicit reasoning.

Taking advantage of this requires including those tasks in the scope of things that can be reasoned about (instead of ignoring them as not falling into your area of expertise, for example by flinching from reasoning about normative questions or intuition as "not scientific" or "not objective"), and developing enough understanding to actually do better than the original heuristics (in some cases by not ignoring what they say), thereby making your explicit reasoning worth trusting.

This calls for identifying other examples of problematic modes of reasoning that engage crude psychological adaptations, and for developing techniques for doing better (and making sure those techniques actually are better before trusting them). These examples come to mind:

- Rational argument: don't use as arguments things that you expect the other person to disagree with; seek a path where every step will be accepted.
- Allocation of responsibility: don't leave it to unvoiced tendencies to do things; discuss effort and motivation explicitly.
- Development of emotional associations with a given situation, person, or thought: take it into your own hands and explicitly train your emotion to be what you prefer it to be, to the extent possible.
- Learning of facts: don't rely on the stupid memory mechanisms that don't understand commands like "this is really important, remember it"; use spaced repetition systems (a sketch of one scheduling rule follows below).
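
To make the last item concrete, here is a minimal sketch of the SM-2 scheduling rule that underlies spaced-repetition tools such as Anki and SuperMemo. The function name and state layout below are illustrative choices of mine, not any real library's API:

```python
# A minimal sketch of the SM-2 spaced-repetition scheduler.
# Cards you recall easily get exponentially longer review gaps;
# cards you fail are reset to a short interval.

def sm2_review(quality, repetitions, interval_days, ease):
    """Update one card's schedule after a review.

    quality: self-graded recall, 0 (blackout) to 5 (perfect).
    repetitions: consecutive successful reviews so far.
    interval_days: current gap between reviews.
    ease: per-card easiness factor, floored at 1.3.
    """
    if quality < 3:
        # Failed recall: restart the repetition sequence.
        repetitions, interval_days = 0, 1
    else:
        if repetitions == 0:
            interval_days = 1
        elif repetitions == 1:
            interval_days = 6
        else:
            interval_days = round(interval_days * ease)
        repetitions += 1
    # Adjust easiness according to how hard the recall felt.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return repetitions, interval_days, ease

# Example: a card reviewed successfully three times in a row.
state = (0, 1, 2.5)  # (repetitions, interval_days, ease)
for q in (5, 4, 5):
    state = sm2_review(q, *state)
    print(state)  # intervals grow: 1 day, 6 days, then ~16 days
```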

And the list goes on. What other cognitive tools could significantly benefit from being transferred to explicit reasoning? Should there be a list of problems and solutions? Which unsolved problems on such a list are particularly worth working on? Which problems with known solutions should be fixed (in any given person) as soon as possible? How do we better facilitate training?

Comments (9)

> Consequentialist elucidation of reasons for approving or disapproving of a given policy (virtue) is an effective persuasion technique if your values are actually right (for the people you try to confer them on), and it doesn't engage the same parts of your brain that make moralizing undesirable.

This does not match my observations.

More generally, I find that I do not trust other people's explicit reasoning more than I trust their other forms of intelligence. For example, I would never use this description:

> What happens here is a transfer of responsibility for important tasks from the imperfect machinery that historically managed them (machinery whose systematic problems, in any given context, humans but not evolution can notice) to explicit reasoning.

We aren't moving away from imperfect machinery here. We're just moving to a different part, and one that some suggest exists primarily for the purpose of constructing bullshit.

There is a potential for improving our moral judgement via explicit reasoning, but that improvement isn't something I would expect from most people who make the shift (where by 'expect' I am mostly just talking about how I have perceived the behaviour of intelligent people engaging in explicit moral reasoning). It takes a lot of training before you can even catch up with your 'default' (some of which Vladimir alluded to).

> There is a potential for improving our moral judgement via explicit reasoning, but that improvement isn't something I would expect from most people who make the shift

Hence the importance of making sure your new mode of reasoning is trustworthy before shifting the load to it, and of continuing to pay attention to what the older modes of reasoning tell you even if you no longer obey them blindly. And the difficulty of doing this on your own calls for institutional tools, such as textbooks, training programs, or community groups.

In my previous comment, I was concerned with contrasting the function of moralization, which is stressed here, with the mechanism of moralization, which is ingrained so deeply that, for example, children develop dysfunctionally without enough praise.

> More generally, I find that I do not trust other people's explicit reasoning more than I trust their other forms of intelligence.

The big problem is that explicit reasoning is at least as often used for rationalizing pre-existing beliefs as for developing correct (or at least, more correct) beliefs. This is also something to watch for carefully in your own thinking. Beware when your explicit reasoning tells you something you want to hear.

> And the list goes on. What other cognitive tools could significantly benefit from being transferred to explicit reasoning?

And the most interesting part is when we take the task through explicit reasoning and then back out to the unconscious, because working memory just plain sucks.

Yes, explicitly developed skills are at their best when practice makes them automatic, no longer requiring conscious attention. But they would be different processes from the original ones, and they can be better. This is also a major topic of Eliezer's last post (which prompted me to write this one).

It certainly overlaps.

Yep, I think this is a key rationality meta-skill. For more, see this great article, which I think would make a killer newbie introduction to Less Wrong; you can save some time by starting your reading at the point where the article says "Time for a pop quiz".

In this post, I wanted to emphasize mostly things other than biases. We all know about biases, but there are other opportunities for improvement, potentially in every mode of human activity.