Most of you are probably familiar with the two contrasting decision-making strategies "maximizing" and "satisficing", but a short recap won't hurt (you can skip the first two paragraphs if you get bored): Satisficing means selecting the first option that is good enough, i.e. one that meets or exceeds a certain threshold of acceptability. In contrast, maximizing means continuing to search until the best possible option is found.
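The contrast can be sketched in a few lines of toy code (my own illustration; the option names and utility numbers are made up):

```python
# Toy contrast between the two decision-making strategies.
# Options are (name, utility) pairs; the utilities are hypothetical.

def satisfice(options, threshold):
    """Pick the first option that is 'good enough', i.e. meets the threshold."""
    for name, utility in options:
        if utility >= threshold:
            return name
    return None  # nothing met the threshold

def maximize(options):
    """Examine every option and pick the one with the highest utility."""
    return max(options, key=lambda option: option[1])[0]

candidates = [("apartment A", 6), ("apartment B", 9), ("apartment C", 7)]

print(satisfice(candidates, threshold=5))  # apartment A (first acceptable one)
print(maximize(candidates))                # apartment B (best overall)
```

Note that the satisficer can stop at the first acceptable option, while the maximizer must inspect every candidate before choosing; that exhaustive search is exactly the extra cost discussed below.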
Research indicates (Schwartz et al., 2002) that there are individual differences with regard to these two decision-making strategies. That is, some individuals – so-called ‘maximizers’ – tend to search extensively for the optimal solution. Other people – ‘satisficers’ – settle for good enough1. Satisficers, in contrast to maximizers, tend to accept the status quo and see no need to change their circumstances2.
When the subject is raised, maximizing usually gets a bad rap. For example, Schwartz et al. (2002) found "negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret."
Maximisers miss out on the psychological benefits of commitment, leaving them less satisfied than their more contented counterparts, the satisficers. ...Current research is trying to understand whether they can change. High-level maximisers certainly cause themselves a lot of grief.
I beg to differ. Satisficers may be more content with their lives, but most of us don't live for the sake of happiness alone. Of course, satisficing makes sense when not much is at stake3. However, maximizing also can prove beneficial, for the maximizers themselves and for the people around them, especially in the realm of knowledge, ethics, relationships and when it comes to more existential issues – as I will argue below4.
Belief systems and Epistemology
Ideal rationalists could be thought of as epistemic maximizers: They try to notice slight inconsistencies in their worldview, take ideas seriously, beware wishful thinking, compartmentalization, rationalizations, motivated reasoning, cognitive biases and other epistemic sins. Driven by curiosity, they don't try to confirm their prior beliefs, but wish to update them until they are maximally consistent and maximally correspondent with reality. To put it poetically, ideal rationalists as well as great scientists are not content to wallow in the mire of ignorance but are imbued with the Faustian yearning to ultimately understand whatever holds the world together in its inmost folds.
In contrast, consider the epistemic habits of the average Joe Christian: He will certainly profess that having true beliefs is important to him. But he doesn't go to great lengths to actually make this happen. For example, he probably believes in an omnipotent and benevolent being that created our universe. Did he impartially weigh all available evidence to reach this conclusion? Probably not. More likely, he merely shares the beliefs of his parents and his peers. However, isn't he bothered by the problem of evil or Occam's razor? And what about all those other religions whose adherents believe with the same certainty in different doctrines?
Many people don’t have good answers to these questions. Their model of how the world works is neither very coherent nor accurate but it's comforting and good enough. They see little need to fill the epistemic gaps and inconsistencies in their worldview or to search for a better alternative. Thus, one could view them as epistemic satisficers. Of course, all of us exhibit this sort of epistemic laziness from time to time. In the words of Jonathan Haidt (2013):
We take a position, look for evidence that supports it, and if we find some evidence—enough so that our position “makes sense”—we stop thinking.
Let's go back to average Joe: he presumably obeys the dictates of the law and his religion and occasionally donates to (ineffective) charities. Joe probably thinks that he is a “good” person and many people would likely agree. This leads us to an interesting question: how do we typically judge the morality of our own actions?
Let's delve into the academic literature and see what it has to offer: In one exemplary study, Sachdeva et al. (2009) asked participants to write a story about themselves using either morally positive words (e.g. fair, nice) or morally negative words (e.g. selfish, mean). Afterwards, the participants were asked if and how much they would like to donate to a charity of their choice. The result: Participants who wrote a story containing the positive words donated only one fifth as much as those who wrote a story with negative words.
This effect is commonly referred to as moral licensing: People with a recently boosted moral self-concept feel like they have done enough and see no need to improve the world even further. Or, as McGonigal (2011) puts it (emphasis mine):
When it comes to right and wrong, most of us are not striving for moral perfection. We just want to feel good enough – which then gives us permission to do whatever we want.
Another well-known phenomenon is scope neglect. One explanation for scope neglect is the "purchase of moral satisfaction" proposed by Kahneman and Knetsch (1992): Most people don't try to do as much good as possible with their money; they only spend just enough cash to create a "warm-fuzzy feeling" in themselves.
Phenomena like "moral licensing" and "purchase of moral satisfaction" indicate that it is all too human to act only as altruistically as is necessary to feel or seem good enough. This could be described as "ethical satisficing", because people just follow the course of action that meets or exceeds a certain threshold of moral goodness. They don't try to carry out the morally optimal action or an approximation thereof (as measured by their own axiology).
I think I cited enough academic papers in the last paragraphs, so let's get more speculative: Many, if not most, people5 tend to be intuitive deontologists6. Deontology basically posits that some actions are morally required, and some actions are morally forbidden. As long as you perform the morally required ones and don't engage in morally wrong actions, you are off the hook. There is no need to do more, no need to perform supererogatory acts. Not neglecting your duties is good enough. In short, deontology could also be viewed as ethical satisficing (see footnote 7 for further elaboration).
In contrast, consider deontology's arch-enemy: Utilitarianism. Almost all branches of utilitarianism share the same principal idea: That one should maximize something for as many entities as possible. Thus, utilitarianism could be thought of as ethical maximizing8.
Effective altruists are an even better example of ethical maximizers because they actually try to identify and implement (or at least pretend to try) the most effective approaches to improve the world. Some conduct in-depth research and compare the effectiveness of hundreds of different charities to find the ones that save the most lives with as little money as possible. And rumor has it there are people who have even weirder ideas about how to ethically optimize literally everything. But more on this later.
Friendships and conversations
Humans intuitively assume that the desires and needs of other people are similar to their own. Consequently, I thought that everyone secretly yearns to find like-minded companions with whom one can talk about one’s biggest hopes as well as one’s greatest fears and form deep, lasting friendships.
But experience tells me that I was probably wrong, at least to some degree: I found it quite difficult to have these sorts of conversations with certain kinds of people, especially in groups (luckily, I’ve also found enough exceptions). It seems that some people are satisfied as long as their conversations meet a certain, not very high threshold of acceptability. Similar observations could be made about their friendships in general. One could call them social or conversational satisficers. By the way, this time research actually suggests that conversational maximizing is probably better for your happiness than small talk (Mehl et al., 2010).
Interestingly, what could be called "pluralistic superficiality" may account for many instances of small talk and superficial friendships since everyone experiences this atmosphere of boring triviality but thinks that the others seem to enjoy the conversations. So everyone is careful not to voice their yearning for a more profound conversation, not realizing that the others are suppressing similar desires.
Crucial Considerations and the Big Picture
On to the last section of this essay. It’s even more speculative and half-baked than the previous ones, but it may be the most interesting, so bear with me.
Research suggests that many people don’t even bother to search for answers to the big questions of existence. For example, in a representative sample of 603 Germans, 35% of the participants could be classified as existentially indifferent, that is, they neither think their lives are meaningful nor suffer from this lack of meaning (Schnell, 2010).
The existential thirst of the remaining 65% is presumably harder to satisfy, but how much harder? Many people don't invest much time or cognitive resources in order to ascertain their actual terminal values and how to optimally reach them – which is arguably of the utmost importance. Instead they appear to follow a mental checklist containing common life goals (one could call them "cached goals") such as a nice job, a romantic partner, a house and probably kids. I’m not saying that such goals are “bad” – I also prefer having a job to sleeping under a bridge and having a partner to being alone. But people usually acquire and pursue their (life) goals unsystematically and without much reflection, which makes it unlikely that such goals exhaustively reflect their idealized preferences. Unfortunately, many humans are so occupied by the pursuit of such goals that they are forced to abandon further contemplation of the big picture.
Furthermore, many of them lack the financial, intellectual or psychological capacities to ponder complex existential questions. I'm not blaming subsistence farmers in Bangladesh for not reading more about philosophy, rationality or the far future. But there are more than enough affluent, highly intelligent and inquisitive people who certainly would be able to reflect about crucial considerations. Instead, they spend most of their waking hours maximizing nothing but the money in their bank accounts or interpreting the poems of some Arabic guy from the 7th century9.
Generally, many people seem to take the current rules of our existence for granted and content themselves with the fundamental evils of the human condition such as aging, needless suffering or death. Whatever the reason may be, they don't try to radically change the rules of life and their everyday behavior seems to indicate that they’ve (gladly?) accepted their current existence and the human condition in general. One could call them existential satisficers.
Contrast this with the mindset of transhumanism. Generally, transhumanists are not willing to accept the horrors of nature and realize that human nature itself is deeply flawed. Thus, transhumanists want to fundamentally alter the human condition and aim to eradicate, for example, aging, unnecessary suffering and ultimately death. Through various technologies transhumanists desire to create a utopia for everyone. Thus, transhumanism could be thought of as existential maximizing10.
However, existential maximizing and transhumanism are not very popular. Quite the opposite, existential satisficing – accepting the seemingly unalterable human condition – has a long philosophical tradition. To give some examples: The otherwise admirable Stoics believed that the whole universe is pervaded and animated by divine reason. Consequently, one should cultivate apatheia and calmly accept one's fate. Leibniz even argued that we already live in the best of all possible worlds. The mindset of existential satisficing can also be found in Epicureanism and arguably in Buddhism. Lastly, religions like Christianity or Islam are generally against transhumanism, partly because this amounts to “playing God”. Which is understandable from their point of view because why bother fundamentally transforming the human condition if everything will be perfect in heaven anyway?
One has to grant ancient philosophers that they couldn't even imagine that one day humanity would acquire the technological means to fundamentally alter the human condition. Thus it is no wonder that Epicurus argued that death is not to be feared or that the Stoics believed that disease or poverty are not really bad: It is all too human to invent rationalizations for the desirability of actually undesirable, but (seemingly) inevitable things – be it death or the human condition itself.
But many contemporary intellectuals can't be given the benefit of the doubt. They argue explicitly against trying to change the human condition. To name a few: Bernard Williams believed that death gives life meaning. Francis Fukuyama called transhumanism the world's most dangerous idea. And even Richard Dawkins thinks that the fear of death is "whining" and that the desire for immortality is "presumptuous"11:
Be thankful that you have a life, and forsake your vain and presumptuous desire for a second one.
With all that said, "run-of-the-mill" transhumanism arguably still doesn't go far enough. There are at least two problems I can see: 1) Without a benevolent superintelligent singleton, "Moloch" (to use Scott Alexander's excellent wording) will never be defeated. 2) We are still uncertain about ontology, decision theory, epistemology and our own terminal values. Consequently, we need some kind of process which can help us understand those things, or we will probably fail to rearrange reality until it conforms with our idealized preferences.
Therefore, it could be argued that the ultimate goal is the creation of a benevolent superintelligence or Friendly AI (FAI) whose values are aligned with ours. There are of course numerous objections to the whole superintelligence strategy in general and to FAI in particular, but I won’t go into detail here because this essay is already too long.
Nevertheless – however unlikely – it seems possible that with the help of a benevolent superintelligence we could abolish all gratuitous suffering and achieve an optimal mode of existence. We could become posthuman beings with god-like intellects, our ecstasy outshining the surrounding stars, and transforming the universe until one happy day all wounds are healed, all despair dispelled and every (idealized) desire fulfilled. To many this seems like sentimental and wishful eschatological speculation but for me it amounts to ultimate existential maximizing12, 13.
The previous paragraphs shouldn’t fool one into believing that maximizing has no serious disadvantages. The desire to aim higher, become stronger and always behave in an optimally goal-tracking way can easily result in psychological overload and subsequent surrender. Furthermore, it seems that adopting the mindset of a maximizer increases the tendency to engage in upward social comparisons and counterfactual thinking, which contribute to depression (Schwartz et al., 2002).
Moreover, there is much to be learnt from Stoicism and satisficing in general: Life isn't always perfect and there are things one cannot change; one should accept one's shortcomings – if they are indeed unalterable – and make the best of one's circumstances. In conclusion, it is better to be a happy satisficer whose moderate productivity is sustainable than a stressed maximizer who burns out after one year. See also these two essays which make similar points.
All that being said, I still favor maximizing over satisficing. If our ancestors had all been satisficers we would still be picking lice off each other’s backs14. And only by means of existential maximizing can we hope to abolish the aforementioned existential evils and all needless suffering – even if the chances seem slim.
[I originally posted a longer, more personal version of this essay on my own blog.]
1. Obviously this is not a categorical classification, but a dimensional one.
2. To put it more formally: The utility function of the ultimate satisficer would assign the same (positive) number to each possible world, i.e. the ultimate satisficer would be satisfied with every possible world. The fewer possible worlds you are satisfied with (i.e. the higher your threshold of acceptability), the fewer possible worlds exist between which you are indifferent, and the less of a satisficer and the more of a maximizer you are. Also note: Satisficing is not irrational in itself. Furthermore, I’m talking about the somewhat messy psychological characteristics and (revealed) preferences of human satisficers/maximizers. Read these posts if you want to know more about satisficing vs. maximizing with regard to AIs.
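This footnote's point can be put in simple notation (my own formalization, not from the cited posts): let $W$ be the set of possible worlds, $u$ an agent's utility function, and $t$ the threshold of acceptability.

```latex
% Ultimate satisficer: every possible world is equally acceptable
u_{\mathrm{sat}}(w) = c > 0 \qquad \text{for all } w \in W

% Acceptance set of a satisficer with threshold t:
% the worlds between which the agent is indifferent
A_t = \{\, w \in W \mid u(w) \ge t \,\}

% Raising the threshold shrinks the acceptance set; as A_t approaches
% a single best world, the agent approaches a maximizer
t_1 \le t_2 \;\Rightarrow\; A_{t_2} \subseteq A_{t_1}
```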
3. Rational maximizers take the value of information and opportunity costs into account.
4. Instead of "maximizer" I could also have used the term "optimizer".
5. E.g. in the "Fat Man" version of the famous trolley dilemma, something like 90% of subjects don't push a fat man onto the track in order to save 5 other people. Also, utilitarians like Peter Singer don't exactly get rave reviews from most folks. Although there is some conflicting research (Johansson-Stenman, 2012). Furthermore, the deontology vs. utilitarianism distinction itself is limited. See e.g. "The Righteous Mind" by Jonathan Haidt.
6. Of course, most people are not strict deontologists. They are also intuitive virtue ethicists and care about the consequences of their actions.
7. Admittedly, one could argue that certain versions of deontology are about maximally not violating certain rules and thus could be viewed as ethical maximizing. However, in the space of all possible moral actions there exist many actions between which a deontologist is indifferent, namely all those actions that exceed the threshold of moral acceptability (i.e. those actions that are not violating any deontological rule). To illustrate this with an example: Visiting a friend and comforting him for 4 hours or using the same time to work and subsequently donating the earned money to a charity are both morally equivalent from the perspective of (many) deontological theories – as long as one doesn’t violate any deontological rule in the process. We can see that this parallels satisficing.
Contrast this with (classical) utilitarianism: In the space of all possible moral actions there is only one optimal moral action for a utilitarian, and all other actions are morally worse. An (ideal) utilitarian searches for and implements the optimal moral action (or tries to approximate it, because in real life one is basically never able to identify, let alone carry out, the optimal moral action). This amounts to maximizing. Interestingly, this inherent demandingness has often been put forward as a critique of utilitarianism (and other sorts of consequentialism), and satisficing consequentialism has been proposed as a solution (Slote, 1984). This is further evidence for the claim that maximizing is generally viewed with suspicion.
8. The obligatory word of caution here: following utilitarianism to the letter can be self-defeating if done in a naive way.
9. Nick Bostrom (2014) expresses this point somewhat harshly:
A colleague of mine likes to point out that a Fields Medal (the highest honor in mathematics) indicates two things about the recipient: that he was capable of accomplishing something important, and that he didn't.
As a general point: Too many people end up as money-, academia-, career- or status-maximizers although those things often don’t reflect their (idealized) preferences.
10. Of course there are lots of utopian movements like socialism, communism or the Zeitgeist movement. But all those movements make the fundamental mistake of ignoring, or at least heavily underestimating, the importance of human nature. Creating utopia merely through social means is impossible because most of us are, by our very nature, too selfish, status-obsessed and hypocritical, and cultural indoctrination can hardly change this. To deny this is to misunderstand the process of natural selection and evolutionary psychology. Secondly, even if a socialist utopia were to come true, there would still exist unrequited love, disease, depression and of course death. To abolish those things one has to radically transform the human condition itself.
11. Here is another quote:
We are going to die, and that makes us the lucky ones. Most people are never going to die because they are never going to be born. […] We privileged few, who won the lottery of birth against all odds, how dare we whine at our inevitable return to that prior state from which the vast majority have never stirred?
― Richard Dawkins in "Unweaving the Rainbow"
12. It’s probably no coincidence that Yudkowsky named his blog "Optimize Literally Everything", which adequately encapsulates the sentiment I tried to express here.
13. For those interested in, or skeptical of, the prospect of superintelligent AI, I recommend "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom.
14. I stole this line from Bostrom’s “In Defense of Posthuman Dignity”.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Haidt, J. (2013). The righteous mind: Why good people are divided by politics and religion. Random House LLC.
Johansson-Stenman, O. (2012). Are most people consequentialists? Economics Letters, 115(2), 225-228.
Kahneman, D., & Knetsch, J. L. (1992). Valuing public goods: the purchase of moral satisfaction. Journal of environmental economics and management, 22(1), 57-70.
McGonigal, K. (2011). The willpower instinct: How self-control works, why it matters, and what you can do to get more of it. Penguin.
Mehl, M. R., Vazire, S., Holleran, S. E., & Clark, C. S. (2010). Eavesdropping on happiness: Well-being is related to having less small talk and more substantive conversations. Psychological Science, 21(4), 539-541.
Sachdeva, S., Iliev, R., & Medin, D. L. (2009). Sinning saints and saintly sinners: The paradox of moral self-regulation. Psychological Science, 20(4), 523-528.
Schnell, T. (2010). Existential indifference: Another quality of meaning in life. Journal of Humanistic Psychology, 50(3), 351-373.
Schwartz, B. (2000). Self determination: The tyranny of freedom. American Psychologist, 55, 79–88.
Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K., & Lehman, D. R. (2002). Maximizing versus satisficing: happiness is a matter of choice. Journal of personality and social psychology, 83(5), 1178.
Slote, M. (1984). Satisficing consequentialism. Proceedings of the Aristotelian Society, 58, 139-163.
The purpose of this essay is to propose an enriched framework of thinking to help optimize the pursuit of agency, the quality of living intentionally. I posit that pursuing and gaining agency involves three components:
1. Evaluating reality clearly, to
2. Make effective decisions, that
3. Achieve our short and long-term goals.
In other words, agency refers to the combination of assessing reality accurately and achieving goals effectively: epistemic and instrumental rationality, respectively. The essay will first explore the concept of agency more thoroughly, and will then consider its application in different life domains, by which I mean areas such as work, romance, friendships, fitness, leisure, and others.
The concepts laid out here sprang from a collaboration between myself and Don Sutterfield, and also discussions with Max Harms, Rita Messer, Carlos Cabrera, Michael Riggs, Ben Thomas, Elissa Fleming, Agnes Vishnevkin, Jeff Dubin, and other members of the Columbus, OH, Rationality Meetup, as well as former members of this Meetup such as Jesse Galef and Erica Edelman. Members of this meetup are also collaborating to organize Intentional Insights, a new nonprofit dedicated to raising the sanity waterline through popularizing Rationality concepts in ways that create cognitive ease for a broad public audience (for more on Intentional Insights, see a fuller description here).
This section describes a framework of thinking that helps assess reality accurately and achieve goals effectively, in other words gain agency. After all, insofar as human thinking suffers from many biases, working to achieve greater agenty-ness would help us lead better lives. First, I will consider agency in relation to epistemic rationality, and then instrumental rationality: while acknowledging fully that these overlap in some ways, I believe it is helpful to handle them in distinct sections.
This essay proposes that gaining agency from the epistemic perspective involves individuals making an intentional evaluation of their environment and situation, in the moment and more broadly in life, sufficient to understand the full extent of one’s options within it and how these options relate to one’s personal short-term and long-term goals. People often make their decisions, both in the moment and major life decisions, based on socially-prescribed life paths and roles, whether due to the social expectations imposed by others or internalized preconceptions, often a combination of both. Such socially-prescribed life roles limit one’s options and thus the capacity to optimize one’s utility in reaching personal goals and preferences. Instead of going on autopilot in making decisions about one’s options, agency involves intentionally evaluating the full extent of one’s options to pursue the ones most conducive to one’s actual personal goals. To be clear, this may often mean choosing options that are socially prescribed, if they also happen to fit within one’s goal set. This intentional evaluation also means updating one’s beliefs based on evidence and facing the truth of reality even when it may seem ugly.
By gaining agency from the instrumental perspective, this essay refers to the ability to achieve one’s short-term and long-term goals. Doing so requires that one first gain a thorough understanding of one’s short-term and long-term goals, through an intentional process of self-evaluation of one’s values, preferences, and intended life course. Next, it involves learning effective strategies to make and carry out decisions conducive to achieving one’s personal goals and thus win at life. In the moment, that involves having an intentional response to situations, as opposed to relying on autopilot reflexes. This statement certainly does not mean going by System 2 at all times, as doing so would lead to rapid ego depletion, whether through actual willpower drain or through other related mechanisms. Agency involves using System 2 to evaluate System 1 and decide when one’s System 1 may be trusted to make good enough decisions and take appropriate actions with minimal oversight, in other words when System 1 has functional cached thinking, feeling, and behavior patterns. In cases where System 1 habits are problematic, agency involves using System 2 to change System 1 habits into more functional ones conducive to one’s goal set, not only behaviors but also changing one's emotions and thoughts. For the long term, agency involves intentionally making plans about one’s time and activities so that one can accomplish one’s goals. This involves learning about and adopting intentional strategies for discovering, setting, and achieving your goals, and implementing these strategies effectively in your life on a daily level.
Much of the discourse on agency in Rationality circles focuses on this notion as a broad category, and the level of agenty-ness for any individual is treated as a single point on a broad continuum of agency (she’s highly agenty, 8/10; he’s not very agenty, 3/10). After all, if someone has a thorough understanding of the concept of agency as demonstrated by the way they talk about agency and goal achievement, combined with their actual abilities to solve problems and achieve their goals in life domains such as their career or romantic relationships, then that qualifies that individual as a pretty high-level agent, right? Indeed, this is what I and others in the Columbus Rationality Meetup believed in the past about agency.
However, in an insight that now seems obvious to us (hello, hindsight bias) and may seem obvious to you after reading this post, we have come to understand that this is far from the case: in other words, just because someone has a high level of agency and success in one life domain does not mean that they have agency in other domains. Our previous belief – that those who understand the concept of agency well and seem highly agenty in one life domain must be agenty across the board – created a dangerous halo effect in evaluating individuals. This halo effect led to highly problematic predictions and normative expectations about the capacities of others, which undermined social relationships by creating misunderstandings, conflicts, and general interpersonal stress. It also led to highly problematic predictions and normative expectations about ourselves, as inflated conceptions of our capacities in a given life domain led to mistaken efforts at optimization, which cost us time, energy, and motivation, and caused personal stress.
Since that realization, we have come across studies on the difference between rationality and intelligence, as well as on broader re-evaluations of dual process theory, and also on the difference between task-oriented thinking and socio-relationship thinking, indicating the usefulness of parsing out the heuristic of “smart” and “rational,” and examining the various skills and abilities covered by that term. However, such research has not yet explored how significant skill in rational thinking and agency in one life domain may (or may not) transfer to those same skills and abilities in other areas of life. In other words, individuals may not be intentional and agenty about their application of rational thinking across various life domains, something that might be conveyed through the term “intentionality quotient.” So let me tell you a bit about ourselves as case studies in how the concept of domains of agency has proved to be useful in thinking rationally about our lives and gaining agency more quickly and effectively in varied domains.
For example, I have a high level of agency in my career area and in time management and organization, both knowing quite a lot about these areas and achieving my goals within them pretty well. Moreover, I am thoroughly familiar with the concept of agency, both from the Rationality perspective and from my own academic research. From that, I and others who know me expect me to express high levels of agency across all of my life domains.
However, I have many challenges in being rational about maximizing my utility gains in relationships with others. Only relatively recently, within the last couple of years or so, have I begun to consider and pursue intentional efforts to reflect on the value that relationships with others have for my life. These intentional efforts resulted from conversations with members of the Columbus Rationality Meetup about their own approaches to relationships, and from reading Less Wrong posts on the topic of relationships. As a result of these efforts, I have begun to deliberately invest resources into cultivating some relationships while withdrawing from others. My System 1 self still has a pretty strong ugh field about doing the latter, and my System 2 has to have a very serious talk with my System 1 every time I make a move to distance myself from extant relationships that no longer serve me well.
This personal example illustrates one major reason why people who have a high level of agency in one life domain may not have it in another life domain. Namely, “ugh” fields and cached thinking patterns prevent many who are quite rational and utility-optimizing in certain domains from applying the same level of intentional analysis to another life domain. For myself, as an introverted bookish child, I had few friends. This was further exacerbated by my family’s immigration to the United States from the former Soviet Union when I was 10, with the consequent deep disruption of interpersonal social development. Thus, my cached beliefs about relationships and my role in them served me poorly in optimizing relationship utility, and only with significant struggle can I apply rational analysis and intentional decision-making to my relationship circles. Still, since starting to apply rationality to my relationships here, I have substantially leveled up my abilities in that domain.
Another major reason why people who have a high level of agency in one life domain may not have it in another life domain results from the fact that people have domain-specific vulnerabilities to specific kinds of biases and cognitive distortions. For example, despite knowing quite a bit about self-control and willpower management, I suffer from challenges managing impulse control over food. I have worked to apply both rational analysis and proven habit management and change strategies to modify my vulnerability to the Kryptonite of food and especially sweets. I know well what I should be doing to exhibit greater agency in that field and have made very slow progress, but the challenges in that domain continually surprise me.
My assessment of my level of agency, which sprang from the areas where I had high agency, caused me to greatly overestimate my ability to optimize in areas where I had low levels of agency, e.g., in relationships and impulse control. As a result, I applied incorrect strategies to level up in those domains, and caused myself a great deal of unnecessary stress, and much loss of time, energy, and motivation.
My realization of the differentiated agency I had across different domains resulted in much more accurate evaluations and optimization strategies. For some domains, such as relationships, the problem resulted primarily from a lack of rational self-reflection. This suggests one major fix to differentiated levels of agency across different life domains – namely, a project that involves rationally evaluating one’s utility optimization in each life area. For some domains, the problem stems from domain-specific vulnerability to certain biases, and that requires applying self-awareness, data gathering, and tolerance toward one’s personally slow optimization in these areas.
My evaluation of the levels of agency of others underwent a similar transformation after the realization that they had different levels of agency in different life domains. Previously, mistaken assessments resulting from the halo effect about agency undermined my social relationships through misunderstandings, conflicts, and general interpersonal stress. For instance, before this realization I found it difficult to understand how one member of the Columbus Rationality Meetup excelled in some life areas, such as managing relationships and social interactions, but suffered from deep challenges in time management and organization. Caring about this individual deeply as a close friend and collaborator, I invested much time and energy resources to help improve this life domain. The painfully slow improvement and many setbacks experienced by this individual caused me to experience much frustration and stress, and resulted in conflicts and tensions between us. However, after making the discovery of differentiated agency across domains, I realized that not only was such frustration misplaced, but that the strategies I was suggesting were targeted too high for this individual, in this domain. A much more accurate assessment of his current capacities and the actual efforts required to level up resulted in much less interpersonal stress and much more effective strategies that helped this individual. Besides myself, other Columbus Rationality Meetup members have experienced similar benefits in applying this paradigm to themselves and to others.
To sum up, this essay provided an overview and some strategies for achieving greater agency - a highly instrumental framework of thinking that helps empower individuals to optimize their ability to assess reality accurately and achieve goals effectively. The essay in particular aims to enrich current discourse on agency by highlighting how individuals have different levels of agency across various life domains, and underscoring the epistemic and instrumental implications of this perspective on agency. While the strategies listed above help achieve specific skills and abilities required to gain greater agency, I would suggest that one can benefit greatly from tying positive emotions to the framework of thinking about agency described above. For instance, one might think to one’s self, “It is awesome to take an appropriately fine grained perspective on how agency works, and I’m awesome for dedicating cycles to that project.” Doing so motivates one’s System 1 to pursue increasing levels of agency: it’s the emotionally rational step to assess reality accurately, achieve goals effectively, and thus gain greater agency in all life domains.
One of the central focuses of LW is instrumental rationality. It's been suggested, rather famously, that this isn't about having true beliefs, but rather it's about "winning". Systematized winning. True beliefs are often useful to this goal, but an obsession with "truthiness" is seen as counter-productive. The brilliant scientist or philosopher may know the truth, yet be ineffective. This is seen as unacceptable to many who see instrumental rationality as the critical path to achieving one's goals. Should we all discard our philosophical obsession with the truth and become "winners"?
The River Instrumentus
You are leading a group of five people away from a deadly threat which is slowly advancing behind you. You come to a river. It looks too dangerous to wade through, but through the spray of the water you see a number of stones. They are dotted across the river in a way that might allow you to cross. However, the five people you are helping are extremely nervous, and in order to convince them to cross you will not only have to show them it's possible, you will also need to look calm enough afterwards to convince them that it's safe. All five of them must cross, as they insist on living or dying together.
Just as you are about to step out onto the first stone it splutters and moves in the mist of the spraying water. It looks a little different from the others, now you think about it. After a moment you realise it's actually a person, struggling to keep their head above water. Your best guess is that this person would probably drown if they got stepped on by five more people. You think for a moment, and decide that, being a consequentialist concerned primarily with the preservation of life, it is ultimately better that this person dies so the others waiting to cross might live. After all, what is one life compared with five?
However, given your need for calm and the horror of their imminent death at your hands (or feet), you decide it is better not to think of them as a person, and so you instead imagine them being simply a stone. You know you'll have to be really convincingly calm about this, so you look at the top of the head for a full hour until you utterly convince yourself that the shape you see before you is factually indicative not of a person, but of a stone. In your mind, tops of heads aren't people - now they're stones. This is instrumentally rational - when you weigh things up the self-deception ultimately increases the number of people who will likely live, and there is no specific harm you can identify as a result.
After you have finished convincing yourself you step out onto the per... stone... and start crossing. However, as you step out onto the subsequent stones, you notice they all shift a little under your feet. You look down and see the stones spluttering and struggling. You think to yourself "lucky those stones are stones and not people, otherwise I'd be really upset". You lead the five very grateful people over the stones and across the river. Twenty dead stones drift silently downstream.
When we weigh situations on pure instrumentality, small self-deception makes sense. The only problem is, in an ambiguous and complex world, self-deceptions have a notorious way of compounding each other, leaving a gaping hole for cognitive bias to work its magic. Many false but deeply-held beliefs throughout human history have been quite justifiable on these grounds. Yet when we forget the value of truth, we can be instrumental, but we are not instrumentally rational. Rationality implies, or ought to imply, a value of the truth.
Winning and survival
In the jungle of our evolutionary childhood, humanity formed groups to survive. In these groups there was a hierarchy of importance, status and power. Predators, starvation, rival groups and disease all took the weak on a regular basis, but the groups afforded a partial protection. However, a violent or unpleasant death still remained a constant threat, particularly to the lowest and weakest members of the group. Sometimes these individuals were weak because they were physically weak. However, over time, groups that allowed and rewarded things other than physical strength became more successful. In these groups, discussion played a much greater role in power and status. The truly strong individuals, the winners in this new arena, were the ones who could direct conversation in their favour - conversations about who would do what, about who got what, and about who would be punished for what. Debates were fought with words, but they could end in death all the same.
In this environment, one's social status was intertwined with one's ability to win. In a debate, it was not so much a matter of what was true, but of what facts and beliefs achieved one's goals. Supporting the factual position that suited one's own goals was most important. Even where the stakes were low or irrelevant, it paid to prevail socially, because one's reputation guided others' limited cognition about who was best to listen to. Winning didn't mean knowing the most; it meant social victory. So when competition bubbled to the surface, it paid to ignore what one's opponent said and instead focus on appearing superior in any way possible. Sure, truth sometimes helped, but for the charismatic it was strictly optional. Politics was born.
Yet as groups got larger, and as technology began to advance for the first time, a new phenomenon appeared. Where a group's power dynamics meant that it systematically held false beliefs, it became more likely to fail. The group that believed fire spirits guided a fire's advancement fared poorly compared with the one that checked the wind and planned its means of escape accordingly. The truth finally came into its own. Yet truth, as opposed to mere belief by politics, could not be so easily manipulated for personal gain. The truth had no master. In this way it was both dangerous and liberating. And so, slowly but surely, the capacity for complex truth-pursuit became evolutionarily impressed upon the human blueprint.
However, in evolutionary terms there was little time for this new mental capacity to develop fully. Some people had it more than others. It also required the right circumstances to rise to the forefront of human thought, and other conditions could easily suppress it. For example, should a person's thoughts be primed with an environment of competition, the old ways came bubbling up to the surface. In a highly competitive environment, the mind reverts to its primitive state: learning and updating of views become increasingly difficult, because to the more primitive aspects of a person's social brain, updating one's views is a social defeat.
When we focus an organisation's culture on winning, there can be many benefits. It can create an air of achievement, to a degree. Hard work and the challenging of norms can be increased. However, we also prime the brain for social conflict. We create an environment where complexity and subtlety in conversation, and consequently in thought, is greatly reduced. In organisations where the goals and means are largely intellectual, a competitive environment creates useless conversations, meaningless debates, pointless tribalism, and little meaningful learning. There are many great examples, but I think you'd be best served watching our elected representatives at work to gain a real insight.
Rationality and truth
Rationality ought to contain an implication of truthfulness. Without it, our little self-deceptions start to gather and compound one another. Slowly but surely, they start to reinforce, join, and form an unbreakable, unchallengeable, yet utterly false belief system. I need not point out the more obvious examples, for in human society there are many. To avoid this on LW and elsewhere, truthfulness of belief ought to inform all our rational decisions, methods and goals. Of course, true beliefs do not guarantee influence or power or achievement, or anything really. In a world of half-evolved truth-seeking equipment, why would we expect that? What we can expect is that, if our goals have anything to do with the modern world in all its complexity, the truth isn't sufficient, but it is necessary.
Instrumental rationality is about achieving one's goals, but in our complex world goals manifest in many ways - and we can never really predict how a false belief will distort our actions to utterly destroy our actual achievements. In the end, without truth, we never really see the stones floating down the river for what they are.
A specific bias that Lesswrongers may often get from fiction is the idea that power is proportional to difficulty. The more power something gives you, the harder it should be to get, right?
A mediocre student becomes a powerful mage through her terrible self-sacrifice and years of studying obscure scrolls. Even within the spells she can cast, the truly world-altering ones are those that demand the most laborious preparation, the most precise gestures, and the longest and most incomprehensible stream of syllables. A monk makes an arduous journey to ancient temples and learns secret techniques of spiritual oneness and/or martial asskickery, which require great dedication and self-knowledge. Otherwise, it would be cheating. The whole process of leveling up, of adding ever-increasing modifiers to die rolls, is based on the premise that power comes to those who do difficult things. And it's failsafe - no matter what you put your skill points in, you become better at something. It's a training montage, or a Hero's journey. As with other fictional evidence, these are not "just stories" -- they are powerful cultural narratives. This kind of narrative shapes moral choices and identity. So where do we see this reflected in less obviously fictional contexts?
There's the rags-to-riches story -- the immigrant who came with nothing, but by dint of hard work, now owns a business. University engineering programs are notoriously tough, because you are gaining the ability to do a lot of things (and for signalling reasons). A writer got to where she is today because she wrote and revised and submitted and revised draft after draft after draft.
In every case, there is assumed to be a direct causal link between difficulty and power. Here, these are loosely defined. Roughly, "power" means "ability to have your way", and "difficulty" is "amount of work & sacrifice required." These can be translated into units of social influence, a.k.a. money, and investment, a.k.a. time (or money). In many cases, power is set by supply and demand -- nobody needs a wizard if they can all cast their own spells, and a doctor can command much higher prices if they're the only one in town. The power of royalty or other birthright follows a similar pattern -- it's not "difficult", but it is scarce: only a very few people have it, and it's close to impossible for others to get it.
Each individual gets to choose what difficult things they will try to do. Some will have longer or shorter payoffs, but each choice will have some return. And since power (partly) depends on everybody else's choices, neoclassical economics says that individuals' choices collectively determine a single market rate for the return on difficulty. So anything you do that's difficult should have the same payoff.
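That equilibrium argument is easy to see in a toy model. Here's a minimal sketch of my own (the pursuit names and payoff numbers are invented for illustration, not anything from the post): if each marginal person keeps abandoning the worst-paying difficult pursuit for the best-paying one, crowding squeezes every pursuit toward a single market rate of return on difficulty.

```python
# Each value is the current payoff per unit of difficulty for a pursuit.
# (Names and starting numbers are made up for illustration.)
pursuits = {"wizardry": 10.0, "engineering": 6.0, "writing": 3.0}

for _ in range(1000):
    best = max(pursuits, key=pursuits.get)
    worst = min(pursuits, key=pursuits.get)
    # One marginal person leaves the worst-paying pursuit for the best-paying one:
    pursuits[best] -= 0.01   # extra competition lowers its return
    pursuits[worst] += 0.01  # reduced competition raises its return

print(sorted(pursuits.values()))  # all three end up near one market rate
```

Nothing here depends on the starting numbers: any initial spread of payoffs gets squeezed toward their average, which is exactly the "single market rate" the neoclassical argument predicts.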
Anything equally difficult should have equal payoff. Apparently. Clearly, this is not the world we live in. Admittedly, there were some pretty questionable assumptions along the way, but the conclusion is almost-kind-of-reasonable if you just generalize from the fictional evidence. (Consider RPGs: they're designed to be balanced. Leveling up any class will advance your power at a more-or-less equal rate.)
So how does reality differ from this fictional evidence? One direction is trivial: it's easy to find examples where what's difficult is not particularly powerful.
Writing a book is hard, and has a respectable payoff (depending on the quality of the book, publicity, etc.). Writing a book without using the letter "e", where the main character speaks only in palindromes, while typing in the dark with only your toes on a computer that's rigged to randomly switch letters around is much much more difficult, but other than perhaps gathering a small but freakishly devoted fanbase, it does not bring any more power/influence than writing any other book. It may be a sign that you are capable of more difficult things, and somebody may notice this and give you power, but this is indirect and unreliable. Similarly, writing a game in machine code or as a set of instructions for a Turing machine is certainly difficult, but also pretty dumb, and has no significant payoff beyond writing the game in a higher-level language. [Edit - thanks to TsviBT: This is assuming there already is a compiler and relevant modules. If you are first to create all of these, there might be quite a lot of benefit.]
On the other hand, some things are powerful, but not particularly difficult. On a purely physical level, this includes operating heavy machinery, or piloting drones. (I'm sure it's not easy, but the power output is immense). Conceptually, I think calculus comes in this category. It can provide a lot of insight into a lot of disparate phenomena (producing utility and its bastard cousin, money), but is not too much work to learn.
As instrumental rationalists, this is the territory we want to be in. We want to beat the market rate for turning effort into influence. So how do we do this?
This is a big, difficult question. I think it's a useful way to frame many of the goals of instrumental rationality. What major should I study? Is this relationship worthwhile? (Note: This may, if poorly applied, turn you into a terrible person. Don't apply it poorly.) What should I do in my spare time?
These questions are tough. But the examples of powerful-but-easy stuff suggest a useful principle: make use of what already exists. Calculus is powerful, but was only easy to learn because I'd already been learning math for a decade. Bulldozers are powerful, and the effort to get this power is minimal if all you have to do is climb in and drive. It's not so worthwhile, though, if you have to derive a design from first principles, mine the ore, invent metallurgy, make all the parts, and secure an oil supply first.
Similarly, if you're already a writer, writing a new book may gain you more influence than learning plumbing. And so on. This begins to suggest that we should not be too hasty to judge past investments as sunk costs. Your starting point matters in trying to find the closest available power boost. And as with any messy real-world problem, luck plays a major role, too.
Of course, there will always be some correlation between power and difficulty -- it's not that the classical economic view is wrong, there are just other factors at play. But to gain influence, you should in general be prepared to do difficult things. However, they should not be arbitrary difficult things -- they should be in areas you have specifically identified as having potential.
To make this more concrete, think of Methods!Harry. He strategically invests a lot of effort, usually at pretty good ratios -- the Gringotts money pump scheme, the True Patronus, his mixing of magic and science, and Partial Transfiguration. Now that's some good fictional evidence.
 Any kind of fiction, but particularly fantasy, sci-fi, and neoclassical economics. All works of elegant beauty, with a more-or-less tenuous relationship to real life.
 Dehghani, M., Sachdeva, S., Ekhtiari, H., Gentner, D., Forbus, K. "The Role of Cultural Narratives in Moral Decision Making." Proceedings of the 31st Annual Conference of the Cognitive Science Society. 2009.
I once asked a room full of about 100 neuroscientists whether willpower depletion was a thing, and there was widespread disagreement with the idea. (A propos, this is a great way to quickly gauge consensus in a field.) Basically, for a while some researchers believed that willpower depletion "is" glucose depletion in the prefrontal cortex, but some more recent experiments have failed to replicate this, e.g. by finding that the mere taste of sugar is enough to "replenish" willpower faster than the time it takes blood to move from the mouth to the brain:
Carbohydrate mouth-rinses activate dopaminergic pathways in the striatum–a region of the brain associated with responses to reward (Kringelbach, 2004)–whereas artificially-sweetened non-carbohydrate mouth-rinses do not (Chambers et al., 2009). Thus, the sensing of carbohydrates in the mouth appears to signal the possibility of reward (i.e., the future availability of additional energy), which could motivate rather than fuel physical effort.
-- Molden, D. C. et al, The Motivational versus Metabolic Effects of Carbohydrates on Self-Control. Psychological Science.
Stanford's Carol Dweck and Greg Walton even found that hinting to people that using willpower is energizing might actually make them less depletable:
When we had people read statements that reminded them of the power of willpower like, “Sometimes, working on a strenuous mental task can make you feel energized for further challenging activities,” they kept on working and performing well with no sign of depletion. They made half as many mistakes on a difficult cognitive task as people who read statements about limited willpower. In another study, they scored 15 percent better on I.Q. problems.
-- Dweck and Walton, Willpower: It’s in Your Head? New York Times.
While these are all interesting empirical findings, there’s a very similar phenomenon that’s much less debated and which could explain many of these observations, but I think gets too little popular attention in these discussions:
Willpower is distractible.
Indeed, willpower and working memory are both strongly mediated by the dorsolateral prefrontal cortex, so “distraction” could just be the two functions funging against one another. To use the terms of Stanovich popularized by Kahneman in Thinking, Fast and Slow, "System 2" can only override so many "System 1" defaults at any given moment.
So what’s going on when people say "willpower depletion"? I’m not sure, but even if willpower depletion is not a thing, the following distracting phenomena clearly are:
- Physical fatigue (like from running)
- Physical discomfort (like from sitting)
- That specific-other-thing you want to do
- Anxiety about willpower depletion
- Indignation at being asked for too much by bosses, partners, or experimenters...
... and "willpower depletion" might be nothing more than mental distraction by one of these processes. Perhaps it really is better to think of willpower as power (a rate) than energy (a resource).
If that’s true, then figuring out what processes might be distracting us might be much more useful than saying “I’m out of willpower” and giving up. Maybe try having a sip of water or a bit of food if your diet permits it. Maybe try reading lying down to see if you get nap-ish. Maybe set a timer to remind you to call that friend you keep thinking about.
The last two bullets,
- Anxiety about willpower depletion
- Indignation at being asked for too much by bosses, partners, or experimenters...
are also enough to explain why being told willpower depletion isn’t a thing might reduce the effects typically attributed to it: we might simply be less distracted by anxiety or indignation about doing “too much” willpower-intensive work in a short period of time.
Of course, any speculation about how human minds work in general is prone to the "typical mind fallacy". Maybe my willpower is depletable and yours isn’t. But then that wouldn’t explain why you can cause people to exhibit less willpower depletion by suggesting otherwise. But then again, most published research findings are false. But then again the research on the DLPFC and working memory seems relatively old and well established, and distraction is clearly a thing...
All in all, more of my chips are falling on the hypothesis that willpower “depletion” is often just willpower distraction, and that finding and addressing those distractions is probably a better strategy than avoiding activities altogether in order to "conserve willpower".
Followup to: Ask and Guess
Ask culture: "I'll be in town this weekend for a business trip. Is it cool if I crash at your place?" Response: “Yes” or “No”.
Guess culture: "Hey, great news! I'll be in town this weekend for a business trip!" Response: Infer that they might be telling you this because they want something from you, conclude that they might want a place to stay, and offer your hospitality only if you want to. Otherwise, pretend you didn’t infer that.
The two basic rules of Ask Culture: 1) Ask when you want something. 2) Interpret things as requests and feel free to say "no".
The two basic rules of Guess Culture: 1) Ask for things if, and *only* if, you're confident the person will say "yes". 2) Interpret requests as expectations of "yes", and, when possible, avoid saying "no".
Both approaches come with costs and benefits. In the end, I feel pretty strongly that Ask is superior.
But these are not the only two possibilities!
"I'll be in town this weekend for a business trip. I would like to stay at your place, since it would save me the cost of a hotel, plus I would enjoy seeing you and expect we’d have some fun. I'm looking for other options, though, and would rather stay elsewhere than inconvenience you." Response: “I think I need some space this weekend. But I’d love to get a beer or something while you’re in town!” or “You should totally stay with me. I’m looking forward to it.”
There is a third alternative, and I think it's probably what rationalist communities ought to strive for. I call it "Tell Culture".
The two basic rules of Tell Culture: 1) Tell the other person what's going on in your own mind whenever you suspect you'd both benefit from them knowing. (Do NOT assume others will accurately model your mind without your help, or that it will even occur to them to ask you questions to eliminate their ignorance.) 2) Interpret things people tell you as attempts to create common knowledge for shared benefit, rather than as requests or as presumptions of compliance.
Suppose you’re in a conversation that you’re finding aversive, and you can’t figure out why. Your goal is to procure a rain check.
- Guess: *You see this annoyed body language? Huh? Look at it! If you don’t stop talking soon I swear I’ll start tapping my foot.* (Or, possibly, tell a little lie to excuse yourself. “Oh, look at the time…”)
- Ask: “Can we talk about this another time?”
- Tell: "I'm beginning to find this conversation aversive, and I'm not sure why. I propose we hold off until I've figured that out."
Here are more examples from my own life:
- "I didn't sleep well last night and am feeling frazzled and irritable today. I apologize if I snap at you during this meeting. It isn’t personal."
- "I just realized this interaction will be far more productive if my brain has food. I think we should head toward the kitchen."
- "It would be awfully convenient networking for me to stick around for a bit after our meeting to talk with you and [the next person you're meeting with]. But on a scale of one to ten, it's only about 3 useful to me. If you'd rate the loss of utility for you as two or higher, then I have a strong preference for not sticking around."
The burden of honesty is even greater in Tell culture than in Ask culture. To a Guess culture person, I imagine much of the above sounds passive aggressive or manipulative, much worse than the rude bluntness of mere Ask. It’s because Guess people aren’t expecting relentless truth-telling, which is exactly what’s necessary here.
If you’re occasionally dishonest and tell people you want things you don't actually care about--like their comfort or convenience--they’ll learn not to trust you, and the inherent freedom of the system will be lost. They’ll learn that you only pretend to care about them to take advantage of their reciprocity instincts, when in fact you’ll count them as having defected if they respond by stating a preference for protecting their own interests.
Tell culture is cooperation with open-source code.
This kind of trust does not develop overnight. Here is the most useful Tell tactic I know of for developing that trust with a native Ask or Guess. It’s saved me sooooo much time and trouble, and I wish I’d thought of it earlier.
"I'm not asking because I expect you to say ‘yes’. I'm asking because I'm having trouble imagining the inside of your head, and I want to understand better. You are completely free to say ‘no’, or to tell me what you’re thinking right now, and I promise it will be fine." It is amazing how often people quickly stop looking shifty and say 'no' after this, or better yet begin to discuss further details.
There are things that are worthless-- that provide no value. There are also things that are worse than worthless-- things that provide negative value. I have found that people sometimes confuse the latter for the former, which can carry potentially dire consequences.
One simple example of this is in fencing. I once fenced with an opponent who put a bit of an unnecessary twirl on his blade when recovering from each parry. After our bout, one of the spectators pointed out that there wasn't any point to the twirls and that my opponent would improve by simply not doing them anymore. My opponent claimed that, even if the twirls were unnecessary, at worst they were merely an aesthetic preference that was useless but not actually harmful.
However, the observer explained that any unnecessary movement is harmful in fencing, because it spends time and energy that could be put to better use-- even if that use is just recovering a split second faster! 
During our bout, I indeed scored at least one touch because my opponent's twirling recovery was slower than a less flashy standard movement. That touch could well be the difference between victory and defeat; in a real sword fight, it could be the difference between life and death.
This isn't, of course, to say that everything unnecessary is damaging. There are many things that we can simply be indifferent towards. If I am about to go and fence a bout, the color of the shirt that I wear under my jacket is of no concern to me-- but if I had spent significant time before the bout debating over what shirt to wear instead of training, it would become a damaging detail rather than a meaningless one.
In other words, the real damage is dealt when something is not only unnecessary, but consumes resources that could instead be used for productive tasks. We see this relatively easily when it comes to matters of money, but when it comes to wastes of time and effort, many fail to make the same leap.
 Miyamoto Musashi agrees:
The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him. You must thoroughly research this.
Related: The Martial Art of Rationality
One principle in the martial arts is that arts that are practiced with aliveness tend to be more effective.
"Aliveness" in this case refers to a set of training principles focused on simulating conditions in an actual fight as closely as possible in training. Rather than train techniques in a vacuum or against a compliant opponent, alive training focuses on training with movement, timing, and energy under conditions that approximate those where the techniques will actually be used.
A good example of training that isn't alive would be methods that focused entirely on practicing kata and forms without making contact with other practitioners; a good example of training that is alive would be methods that focused on verifying the efficacy of techniques through full-contact engagement with other practitioners.
Aliveness tends to create an environment free from epistemic viciousness-- if your technique doesn't work, you'll know because you won't be able to use it against an opponent. Further, if your technique does work, you'll know that it works because you will have applied it against people trying to prevent you from doing so, and the added confidence will help you better apply that technique when you need it.
Evidence from martial arts competitions indicates that those who practice with aliveness are more effective than others. One of the chief reasons that Brazilian jiu-jitsu (BJJ) practitioners were so successful in early mixed martial arts tournaments was that BJJ-- a martial art that relies primarily on grappling and the use of submission holds and locks to defeat the opponent-- can be trained safely with almost complete aliveness, whereas many other martial arts cannot.
Now, this is not to say that one should only attempt to practice martial arts under completely realistic conditions. For instance, no martial arts school that I am aware of randomly ambushes or attempts to mug its students on the streets outside of class in order to test how they would respond under truly realistic conditions.
Even in the age of sword duels, people would train with blunt weapons and protective armor rather than sharp weapons and ordinary clothes. Would training with sharp weapons and ordinary clothes be more alive than training with blunt weapons and protective armor? Certainly, but the trainees wouldn't be! And yet training with blunt weapons is still useful-- the fact that training does not fully approximate realistic conditions does not intrinsically mean it is bad.
That said, martial arts training that is more alive-- that better approximates realistic fighting conditions-- is generally more effective, within reasonable safety margins. There is a growing consensus among students of martial arts looking for effective self-defense techniques that the specific art one practices matters less than the extent to which the training does or doesn't use aliveness.
Aliveness and Rationality
So, that's all well and good-- but how can we apply these principles to rationality practice?
While martial arts training has very clear methods of measuring whether or not skills work (can I apply this technique against a resisting opponent?), rationality training is much murkier-- measuring rationality skills is a nontrivial problem.
Further, under normal circumstances the opponent that you are resisting when applying rationality techniques is your own brain, not an external enemy. This makes applying appropriate levels of resistance in training difficult, because it's very easy to cheat yourself. The best method that I have found thus far is lucid dreaming, as forcing your dreaming brain to recognize its true state through the various hallucinations and constructed memories associated with dreaming is no easy task.
That being said, I make no claims to special or unique knowledge in this area. If anyone has suggestions for useful methods of "live" rationality practice, I'd love to hear them.
 For further explanation, see Matt Thornton's classic video "Why Aliveness?"
 If your plan is to choke someone until they fall unconscious, it is possible to safely train for this with nearly complete aliveness by wrestling against an opponent and simply releasing the chokehold before they actually fall unconscious. By contrast, it is much harder to safely train to punch someone into unconsciousness, and harder still to safely train to break people's necks.
 The game of Assassins does do this, but usually follows rules that are constrained enough to make it a suboptimal method of training.
 There are some contexts in which rationality techniques are applied in order to overcome an external enemy. Competitive games and some sports are a good method of finding practice in this respect. For instance, in order to be a competitive Magic: The Gathering player, you need to engage many epistemic and instrumental rationality skills. Competitive poker can offer similar development.
Note: this post is no longer endorsed by the author, for reasons partially described here.
In the spirit of radioing back to describe a path:
The truly absurd thing about dreams lies not with their content, but with the fact that we believe them. Perfectly outrageous and impossible things can occur in dreams without the slightest hesitance to accept them on the part of the dreamer. I have often dreamed myself into bizarre situations that come complete with constructed memories explaining how they secretly make sense!
However, sometimes we break free from these illusions and become aware of the fact that we are dreaming. This is known as lucid dreaming, and it can be an extremely pleasant experience. Unfortunately, relatively few people experience lucid dreams "naturally"; fortunately, lucid dreaming is also a skill, and like any other skill it can be trained.
While this is all very interesting, you may be wondering what it has to do with rationality. Simply put, I have found lucid dreaming perhaps the best training currently available when it comes to increasing general rationality skills. It is one thing to notice when you are confused by ordinary misunderstandings or tricks; it is another to notice while your own brain is actively constructing memories and environments to fool you!
I've been involved in lucid dreaming for about eight years now and teaching lucid dreaming for two, so I'm pretty familiar with it on a non-surface level. I've also been explicitly looking into the prospect of using lucid dreaming for rationality training purposes since 2010, and I'm fairly confident that it will prove useful for at least some people here.
If you can get yourself to the point where you can consistently induce lucid dreaming by noticing the inconsistencies and absurdities of your dream state, I predict that you will become a much stronger rationalist in the process. If my prediction is correct, lucid dreaming allows you to hone rationality skills while also having fun, and best of all permits you to do this in your sleep!
If this sounds appealing to you, perhaps the most concise and efficient resource for learning lucid dreaming is the book Lucid Dreaming, by Dr. Stephen LaBerge. However, this is a book and costs money. If you're not into that, a somewhat less efficient but much more comprehensive view of lucid dreaming can be found on the website dreamviews.com. I further recommend that anyone interested in this check out the Facebook group Rational Dreamers. Recently founded by LW user BrienneStrohl, this group provides an opportunity to discuss lucid dreaming and related matters in an environment free from some of the mysticism and confusion that otherwise surrounds this issue.
All in all, it seems that lucid dreaming may offer a method of training your rationality in a way that is fun, interesting, and takes essentially none of your waking hours. Thus, if you are interested in increasing your general rationality, I strongly recommend investigating lucid dreaming. To be frank, my main concern about lucid dreaming as a rationality practice is simply that it seems too good to be true.
 Note that this is only one of many ways of inducing lucid dreaming. However, most other techniques that I have tried are not necessarily useful forms of rationality practice, effective as they might be.
 And, to be honest, "fun" is an understatement.