Anna Salamon and I are confused. Both of us notice ourselves rationalizing on pretty much a daily basis and have to apply techniques like the Litany of Tarski pretty regularly. But in several of our test sessions for teaching rationality, a handful of people report never rationalizing and seem to have little clue what Tarski is for. They don't relate to any examples we give, whether fictitious or actual personal examples from our lives. Some of these people show signs of being rather high-level rationalists overall, although some don't.
I wish there were a more standard term for this than "kinesthetic thinking", one that other people could look up and understand what was meant.
(A related term is "motor cognition", but that doesn't denote a thinking style. Motor cognition is a theoretical paradigm in cognitive psychology, according to which most cognition is a kind of higher-order motor control/planning activity, connected in a continuous hierarchy with conventional concrete motor control and implemented by the same neural machinery. (See also: precuneus (reflective cognition?); compare perceptual control theory.) Another problem with the term "motor cognition" is that it doesn't convey the important nuance of "higher-order motor planning, but without necessarily any concurrent processing of represented concrete motions". (And the other would-be closest option, "kinesthetic learning", actively denotes the opposite.)
Plausibly, people could be trained to introspectively attend to the aspect of cognition that is like motor planning, using a combination of TCMS (to inhibit visual and auditory imagery) and cognitive tasks that involve salient constraints and tradeoffs. Maybe the cognitive tasks would also need to attach specific positive or negative consequences to the apparent execution of recognizable scripts of sequential actions, of the kind typical of normally learned plans for the task. Some natural tasks with some of these features, and which are not intrinsically verbal or visual, would be social reasoning, mathematical proof planning, or software engineering.)
I think kinesthetic thinking is still subject to things like rationalization. For example, you might have to commit to regarding a certain planned action a certain way as part of a complex motivational gambit, with the side effect that you commit to pretending that the action will have an expected value other than the one you would normally assign. If this ability to make commitments that affect perceived expected value can be used well, then by default it is probably also being used badly.
Could you give more details about the things like rationalization that you were thinking of, and what it feels like to decide not to do them in kinesthetic thinking?