Vladimir_Nesov comments on Expected futility for humans - Less Wrong

11 [deleted] 09 June 2009 12:04PM


Comment author: orthonormal 09 June 2009 05:02:22PM 6 points [-]

I generally agree, but I challenge the claim that the (mostly social) failures of conscious consequentialist reasoning are just a matter of speed of calculation versus a cached rule. In most social situations, one or several such rules feel particularly salient to our decision-making at any given moment, but the process by which those particular rules come to seem salient is the essence of our real (unconscious) calculation.

We already have a well-developed neural framework for social situations, and a conscious calculation of utility is unlikely to outperform that framework across that domain (though it can lead to occasional insights that the heuristics of the framework miss). Compare it to our native 'physics engine' that allows us to track objects, move around and even learn to catch a ball, versus the much slower and mistake-prone calculations that can still give the better answer when faced with truly novel situations (our intuitive physics is wrong about what happens to a helium balloon in the car when you slam on the brakes, but a physics student can get the right answer with conscious thought).

I suggest that attempting to live one's entire life by either conscious expected-utility maximization or even by execution of consciously-chosen low-level heuristics is going to work out badly. What works better for human beings is to generally trust the unconscious in familiar social domains, but to observe and analyze ourselves periodically in order to identify (and hopefully patch) some deleterious biases. We should also try to rely more on conscious Bayesian reasoning in domains (like scientific controversies or national politics) that were unlikely to be optimized for in the ancestral environment.

This leaves aside, of course, the question of what to do when one's conscious priorities seem to oppose one's unconscious priorities (which they do, not in most things, but in some crucial matters).

Comment author: Vladimir_Nesov 09 June 2009 06:04:08PM *  0 points [-]

We already have a well-developed neural framework for social situations, and a conscious calculation of utility is unlikely to outperform that framework across that domain

It's not about outperforming; it's about improving on what you already have. There is no competition: incoherence is indisputably wrong wherever it appears. A tradeoff arises only if the time spent reflecting on the coherence of your decisions could be better spent elsewhere, and that other activity needn't be identified with "instinctive decision-making"; it might just as well be hunting or sleeping.

Comment author: orthonormal 09 June 2009 06:33:21PM 0 points [-]

The context here is an aspiring rationalist trying to consciously plan and follow a complete social strategy, rejecting their more basic intuitions about how to act in favor of their consequentialist calculus. This sort of conscious engineering often fails spectacularly, as I can attest. (The usual exceptions are heuristics that have been tested and passed on by others; these are more likely to succeed not because of their rational appeal relative to other suggestions, but because they have been optimized by selection.)

Comment author: Vladimir_Nesov 09 June 2009 06:56:08PM 0 points [-]

Then they are overreaching, using the tool incorrectly and confusing themselves instead of fixing the problems. Note that conscious planning is itself mostly intuition, not expected-utility maximization. What you've highlighted is simply the incoherence of applying the practice where the consequence of doing so is failure, while the goal is success.