Introduction

The purpose of this essay is to propose an enriched framework of thinking to help optimize the pursuit of agency, the quality of living intentionally. I posit that pursuing and gaining agency involves three components:

1. Evaluating reality clearly, to

2. Make effective decisions, that

3. Achieve our short and long-term goals.

In other words, agency refers to the combination of assessing reality accurately and achieving goals effectively, that is, epistemic and instrumental rationality. The essay will first explore the concept of agency more thoroughly, and will then consider its application in different life domains, by which I mean areas of life such as work, romance, friendships, fitness, and leisure.

The concepts laid out here sprang from a collaboration between Don Sutterfield and me, and from discussions with Max Harms, Rita Messer, Carlos Cabrera, Michael Riggs, Ben Thomas, Elissa Fleming, Agnes Vishnevkin, Jeff Dubin, and other members of the Columbus, OH, Rationality Meetup, as well as former members of this Meetup such as Jesse Galef and Erica Edelman. Members of this meetup are also collaborating to organize Intentional Insights, a new nonprofit dedicated to raising the sanity waterline by popularizing Rationality concepts in ways that create cognitive ease for a broad public audience (for more on Intentional Insights, see a fuller description here).

Agency

This section describes a framework of thinking that helps assess reality accurately and achieve goals effectively, in other words gain agency. After all, insofar as human thinking suffers from many biases, working to achieve greater agenty-ness would help us lead better lives. First, I will consider agency in relation to epistemic rationality, and then instrumental rationality: while acknowledging fully that these overlap in some ways, I believe it is helpful to handle them in distinct sections.

This essay proposes that gaining agency from the epistemic perspective involves individuals making an intentional evaluation of their environment and situation, in the moment and more broadly in life, sufficient to understand the full extent of one’s options within it and how these options relate to one’s personal short-term and long-term goals. People often make their decisions, both in-the-moment choices and major life decisions, based on socially prescribed life paths and roles, whether due to social expectations imposed by others, internalized preconceptions, or, often, a combination of both. Such socially prescribed life roles limit one’s options and thus the capacity to optimize one’s utility in reaching personal goals and preferences. Instead of going on autopilot in making decisions about one’s options, agency involves intentionally evaluating the full extent of one’s options to pursue the ones most conducive to one’s actual personal goals. To be clear, this may often mean choosing options that are socially prescribed, if they also happen to fit within one’s goal set. This intentional evaluation also means updating one’s beliefs based on evidence and facing the truth of reality even when it may seem ugly.

By gaining agency from the instrumental perspective, I mean the ability to achieve one’s short-term and long-term goals. Doing so requires that one first gain a thorough understanding of those goals, through an intentional process of self-evaluation of one’s values, preferences, and intended life course. Next, it involves learning effective strategies to make and carry out decisions conducive to achieving one’s personal goals and thus win at life. In the moment, that involves having an intentional response to situations, as opposed to relying on autopilot reflexes. This certainly does not mean going by System 2 at all times, as doing so would lead to rapid ego depletion, whether through actual willpower drain or through other related mechanisms. Agency involves using System 2 to evaluate System 1 and decide when one’s System 1 may be trusted to make good enough decisions and take appropriate actions with minimal oversight, in other words when System 1 has functional cached thinking, feeling, and behavior patterns. In cases where System 1 habits are problematic, agency involves using System 2 to change those habits into more functional ones conducive to one’s goal set, changing not only one's behaviors but also one's emotions and thoughts. For the long term, agency involves intentionally making plans about one’s time and activities so that one can accomplish one’s goals. This means learning about and adopting intentional strategies for discovering, setting, and achieving one’s goals, and implementing these strategies effectively in one’s daily life.

Life Domains

Much of the discourse on agency in Rationality circles focuses on this notion as a broad category, and the level of agenty-ness for any individual is treated as a single point on a broad continuum of agency (she’s highly agenty, 8/10; he’s not very agenty, 3/10). After all, if someone has a thorough understanding of the concept of agency as demonstrated by the way they talk about agency and goal achievement, combined with their actual abilities to solve problems and achieve their goals in life domains such as their career or romantic relationships, then that qualifies that individual as a pretty high-level agent, right? Indeed, this is what I and others in the Columbus Rationality Meetup believed in the past about agency.

However, in an insight that now seems obvious to us (hello, hindsight bias) and may seem obvious to you after reading this post, we have come to understand that this is far from the case: just because someone has a high level of agency and success in one life domain does not mean that they have agency in other domains. Our previous belief, that those who understand the concept of agency well and seem highly agenty in one life domain must be agenty across the board, created a dangerous halo effect in evaluating individuals. This halo effect led to highly problematic predictions and normative expectations about the capacities of others, which undermined social relationships by creating misunderstandings, conflicts, and general interpersonal stress. It also led to highly problematic predictions and normative expectations about ourselves: inflated conceptions of our capacities in a given life domain led to mistakes in our efforts at optimization, which cost us time, energy, and motivation, and created personal stress.

Since that realization, we have come across studies on the difference between rationality and intelligence, as well as on broader re-evaluations of dual process theory, and on the difference between task-oriented thinking and socio-relationship thinking, indicating the usefulness of unpacking the heuristics of “smart” and “rational” and examining the various skills and abilities those terms cover. However, such research has not yet explored how significant skill in rational thinking and agency in one life domain may (or may not) transfer to the same skills and abilities in other areas of life. In other words, individuals may not be intentional and agenty about their application of rational thinking across various life domains, something that might be conveyed through the term “intentionality quotient.” So let me offer ourselves as case studies in how the concept of domains of agency has proved useful in thinking rationally about our lives and gaining agency more quickly and effectively in varied domains.

For example, I have a high level of agency in my career area and in time management and organization, both knowing quite a lot about these areas and achieving my goals within them pretty well. Moreover, I am thoroughly familiar with the concept of agency, both from the Rationality perspective and from my own academic research. From that, I and others who know me expect me to express high levels of agency across all of my life domains.

However, I have many challenges in being rational about maximizing my utility gains in relationships with others. Only relatively recently, within the last couple of years or so, have I begun to consider and pursue intentional efforts to reflect on the value that relationships with others have for my life. These intentional efforts resulted from conversations with members of the Columbus Rationality Meetup about their own approaches to relationships, and from reading Less Wrong posts on the topic of relationships. As a result of these efforts, I have begun to deliberately invest resources into cultivating some relationships while withdrawing from others. My System 1 self still has a pretty strong ugh field about doing the latter, and my System 2 has to have a very serious talk with my System 1 every time I make a move to distance myself from extant relationships that no longer serve me well.

This personal example illustrates one major reason why people who have a high level of agency in one life domain may not have it in another life domain. Namely, “ugh” fields and cached thinking patterns prevent many who are quite rational and utility-optimizing in certain domains from applying the same level of intentional analysis to another life domain. For myself, as an introverted bookish child, I had few friends. This was further exacerbated by my family’s immigration to the United States from the former Soviet Union when I was 10, with the consequent deep disruption of interpersonal social development. Thus, my cached beliefs about relationships and my role in them served me poorly in optimizing relationship utility, and only with significant struggle can I apply rational analysis and intentional decision-making to my relationship circles. Still, since starting to apply rationality to my relationships here, I have substantially leveled up my abilities in that domain.

Another major reason why people who have a high level of agency in one life domain may not have it in another life domain results from the fact that people have domain-specific vulnerabilities to specific kinds of biases and cognitive distortions. For example, despite knowing quite a bit about self-control and willpower management, I suffer from challenges managing impulse control over food. I have worked to apply both rational analysis and proven habit management and change strategies to modify my vulnerability to the Kryptonite of food and especially sweets. I know well what I should be doing to exhibit greater agency in that field and have made very slow progress, but the challenges in that domain continually surprise me.

My assessment of my level of agency, which sprang from the areas where I had high agency, caused me to greatly overestimate my ability to optimize in areas where I had low levels of agency, e.g., relationships and impulse control. As a result, I applied incorrect strategies to level up in those domains, causing myself a great deal of unnecessary stress and much loss of time, energy, and motivation.

My realization of the differentiated agency I had across different domains resulted in much more accurate evaluations and optimization strategies. For some domains, such as relationships, the problem resulted primarily from a lack of rational self-reflection. This suggests one major fix for differentiated levels of agency across life domains: a project of rationally evaluating one’s utility optimization in each life area. For other domains, the problem stems from domain-specific vulnerability to certain biases, and that requires self-awareness, data gathering, and patience with one’s slower progress at optimization in those areas.

My evaluation of the levels of agency of others underwent a similar transformation after the realization that they had different levels of agency in different life domains. Previously, mistaken assessments resulting from the halo effect about agency undermined my social relationships through misunderstandings, conflicts, and general interpersonal stress. For instance, before this realization I found it difficult to understand how one member of the Columbus Rationality Meetup excelled in some life areas, such as managing relationships and social interactions, but suffered from deep challenges in time management and organization. Caring about this individual deeply as a close friend and collaborator, I invested much time and energy to help him improve in this life domain. The painfully slow improvement and many setbacks he experienced caused me much frustration and stress, and resulted in conflicts and tensions between us. However, after making the discovery of differentiated agency across domains, I realized not only that such frustration was misplaced, but that the strategies I was suggesting were pitched too high for this individual in this domain. A much more accurate assessment of his current capacities and of the actual efforts required to level up resulted in much less interpersonal stress and much more effective strategies that helped this individual. Besides myself, other Columbus Rationality Meetup members have experienced similar benefits in applying this paradigm to themselves and to others.

Final Thoughts

To sum up, this essay provided an overview of and some strategies for achieving greater agency: a highly instrumental framework of thinking that helps empower individuals to optimize their ability to assess reality accurately and achieve goals effectively. The essay in particular aims to enrich current discourse on agency by highlighting how individuals have different levels of agency across various life domains, and by underscoring the epistemic and instrumental implications of this perspective on agency. While the strategies listed above help develop the specific skills and abilities required to gain greater agency, I would suggest that one can benefit greatly from tying positive emotions to the framework of thinking about agency described above. For instance, one might think to oneself, “It is awesome to take an appropriately fine grained perspective on how agency works, and I’m awesome for dedicating cycles to that project.” Doing so motivates one’s System 1 to pursue increasing levels of agency: it is the emotionally rational step to assess reality accurately, achieve goals effectively, and thus gain greater agency in all life domains.

Comments

I feel that the framing of "System 2 fixing System 1" is what leads to the valley of bad rationality. System 1 gives important feedback, has unique skills, and is most of what we are.

Agreed that System 1 gives important feedback, has unique skills, and is most of what we are, and there have been some good ideas expressed on LW about that. I would in addition suggest that System 2 is best set up as a means to evaluate System 1, see where it is doing well and where it needs improvement, and then provide improvement as needed. Thoughts?

A strict hierarchy where System 2 always wins in a direct confrontation because it is better at logical arguments is bad. It is bad because it is going to make the parts of you that have desires but don't know how to express them with rigorous logical arguments feel bad.

If we're using the elephant and rider model, the aim is a harmonious relationship rather than one where the rider has to poke the elephant with a sharp stick and has general disdain for it.

I believe my article above did not convey the idea that we should always go by System 2, as that is not a wise move for the reasons I outlined above. I do strongly believe that we should use System 2 thinking to examine the parts of ourselves that have strong desires but don't know how to express them, evaluate whether these desires are beneficial to oneself, and change those parts that are not beneficial, for instance using Dark Arts of Rationality on oneself.

The Elephant and Rider model I used above corresponds to previous discussions on LW about this topic, do you disagree with those discussions?

Sometimes nurses have an intuition that a particular patient will develop a problem in the coming day, but the nurse has no evidence of an impending problem from which to make a logical argument.

Research suggests that in those situations it's better to put the patient under extra monitoring than to do nothing.

Firemen who feel unexplainable fear in a building should get out as soon as possible.

In general, when there is a high cost to ignoring a justified fear but a low cost to following it, it's good to accept the fear even if you don't have a logical reason.

There are also a bunch of cases where the literature suggests that people making decisions via unconscious thought do as well as or better than people who undergo conscious deliberation.

http://www.ncbi.nlm.nih.gov/pubmed/20228284 is, for example, a study suggesting that diagnosis of psychiatric cases works better via unconscious thinking than via conscious thinking.

Yup, that makes a lot of sense; agreed on the usefulness of taking in all signals. Intuition can be very useful indeed. I'd also say that intuition would benefit from being occasionally evaluated by System 2, to see if it can be improved so that it serves us more effectively in our daily activities. Your thoughts?

What do you mean when you say "evaluate intuition" in practical terms?

The way I tend to do it is to sit down once a week, and evaluate my current cached habits, thoughts, beliefs, roles, etc - the whole complex of factors that makes up what I perceive as "intuition" for me. I see if they are serving me well, or not. If I find they are not serving me well, I strive to change them.

Thoughts and beliefs are System 2 stuff. If you do X because you believe Y, that's System 2.

Intuition is often when you do things for reasons that can't be expressed in language, and as such it's a lot harder to investigate. The kind of intuition that lets a fireman feel fear when he can't see any logical reason, and then leave the building to save his life, can't be broken down analytically.

I don't see a good reason to speak in terms of System 1 and System 2 when talking about sitting down to retrospect.

I don't think you understand what I'm getting at. You seem to still be positing that System 2 is the "adult" that will discard parts of System 1 when they are not to the advantage of goals System 2 has.

I would in addition suggest that System 2 is best set up as a means to evaluate System 1, see where it is doing well and where it needs improvement

System 2 can be used to improve System 1. Noticing confusion on the other hand is an example where you use system 1 to intervene in a System 2 process.

Good point, agreed on noticing confusion. In my experience, I had to train my System 1 to develop a habit to notice confusion, so first I used System 2 to improve System 1, and then let System 1 intervene in System 2. What was your experience here?

I agree

When I have disagreements with myself or when I'm trying to seize agency for myself, it's as though this intuition bumps into that one while they both rest on the surface of this one, which is actually a mix of two earlier intuitions that aren't quite settled together yet, and they are surrounded by more fluid, less well-defined intuitions which determine the specifics of their collision and their movement post-interaction, that sort of thing. It's a very bottom-up process, not hierarchical or commanded at all, so it isn't adequately described by appealing to System 2.

This is one reason why there have been discussions about the need to re-evaluate dual process theory and have a more complex understanding of rationality and intelligence. I think the concept of "domains of agency" provides one way of enriching the current conversation, but there are many others as well, such as what you describe about the disagreement with oneself. That might be a good topic to post about in the Less Wrong Discussion thread.


I generally use System 1 in a System 2 like way when I have disagreements with myself or when I'm trying to seize agency. It's easier thinking about it as though it's a physical system. This intuition bumps into that one as they both rest on the surface of this one, which is actually a mix of two earlier intuitions that aren't quite perfectly resolved with one another, and they are surrounded by more fluid intuitions that are less well defined which determine their interactions and their movement after they interact, that sort of thing. It's a very bottom up process, thus I don't feel comfortable attributing it to System 2, but at the same time traditional characterizations of System 1 fail to adequately describe it.

[This comment is no longer endorsed by its author]

Evidence please?

Okay, thanks for the update, and of course the idea of measuring agentness, while simultaneously being careful not to apply the halo effect to agentness, is fundamentally sound. I would propose treating the perceived agentness of a certain person as a belief, so that it can be updated quickly with well-known rationalist patterns when the focus shifts to another domain.

Let us take the example of a person who is very agenty in managing relationships but bad at time management, as given in your post. In this case, I would observe that this person displays high levels of agentness in managing relationships. However, this does not equate to high agentness in other fields; yet it may be an indication of an overall trend of agentness in his life. Therefore, if his relationship agentness level is 10, I might estimate a prior for his agentness in any random domain to be, say, 6.

Now, suppose I observe him scheduling his tasks with a supposed agentness of 6 and he screws it up completely, because of an inherent weakness in that domain which I didn't know about. After the first few times he was late, I could lower the probability I assign to the belief that his agentness in that domain (time management) is actually 6, and increase the probability of the belief that it is 3, for instance, plus slight increases for the adjacent numbers (2 and 4).
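To make the arithmetic concrete, here is a minimal Python sketch of that kind of update. Everything in it is a made-up illustration: the prior over agentness levels, the likelihood model for lateness, and the number of late arrivals are placeholder choices, not anything prescribed by the post or the comment above.

```python
# Illustrative sketch: treat "agentness in the time-management domain"
# as a belief over discrete levels 0-10 and update it with Bayes' rule
# after observing the person show up late several times.
# All numbers are hypothetical placeholders.

levels = list(range(11))

# Prior centered near 6: high relationship agentness is taken as weak
# evidence of an overall trend, not proof of agentness everywhere.
prior = [0.01, 0.02, 0.04, 0.08, 0.13, 0.18, 0.20, 0.15, 0.10, 0.06, 0.03]

def likelihood_of_lateness(level, times_late=3):
    """P(repeated lateness | agentness level): assume lower agentness
    makes missing one's own schedule more likely."""
    p_late_once = 0.9 - 0.07 * level  # made-up linear model
    return p_late_once ** times_late

# Posterior is proportional to prior times likelihood, then normalized.
unnormalized = [p * likelihood_of_lateness(lvl) for lvl, p in zip(levels, prior)]
total = sum(unnormalized)
posterior = [u / total for u in unnormalized]

for lvl, p in zip(levels, posterior):
    print(f"agentness {lvl}: {p:.2f}")
# The probability mass slides from around 6 down toward 2-4,
# mirroring the verbal update described above.
```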

However, cached thoughts do interest me. We have seen clearly that cached thoughts can act against agentness; but in my opinion the correct path is to make cached thoughts for agentness. Say you discover that in situation X, given Y and Z, A is almost always (or with a sufficiently high probability) the most agenty option. Then you can use your System 2 to train your System 1 into storing this pattern, and in future situations you will reflexively perform A, with a slow-down consideration whose weight depends on the chance that the agenty option is not A after all, times its disutility, and so on.
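As a rough illustration of that slow-down consideration, here is a tiny Python sketch; the function name, the probabilities, the disutility, and the threshold are all hypothetical, and the point is only the expected-cost check, not any particular numbers.

```python
# Hypothetical sketch of a cached "agenty" action A with a slow-down check:
# act on the cached pattern by default, but hand control back to
# deliberate (System 2) thinking when the expected cost of a reflex
# mistake is too high. All inputs are illustrative only.

def respond(p_not_A_best: float, disutility_if_wrong: float,
            slow_down_threshold: float = 1.0) -> str:
    """Choose a response for a situation matching pattern X, given Y and Z."""
    expected_cost_of_reflex = p_not_A_best * disutility_if_wrong
    if expected_cost_of_reflex > slow_down_threshold:
        return "slow down and re-evaluate the options deliberately"
    return "perform cached action A reflexively"

# Familiar, low-stakes situation: the cached response is good enough.
print(respond(p_not_A_best=0.05, disutility_if_wrong=2.0))
# Unusual, high-stakes situation: the slow-down consideration kicks in.
print(respond(p_not_A_best=0.30, disutility_if_wrong=10.0))
```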

I would say that cached thoughts are very interesting phenomena, being able to control the first actions of a human being (and the actions that we, being impulsive creatures, normally take first), and that with proper training it might even be possible to use them for good.

I like your suggestion of treating the perceived agenty-ness of a certain person as a belief! Perhaps there can be a scale/scorecard developed to evaluate someone's agenty-ness on different life domains. And we can even give feedback/training to others on their agenty-ness to help them update their beliefs about and skills in certain areas. For example, with your description of someone who is frequently late, that person can be encouraged to develop a specific focus to avoid planning fallacy.

So there might be fine-grained ways of dealing with specific challenges in specific life domains. We at Intentional Insights have actually been thinking of various ways of training people to improve their agency in specific life domains, and having scales/scorecards for specific domains, along with strategies for dealing with that specific domain, might be a useful product. Good suggestion there, thanks!


That's nice. Now what are these goals you think we're trying to achieve, again?


Gosh, but aren't my desires a miserably bad measure of what's actually good for me?

(Yes, being facetious and semi-trolling. Also yes, would like to see someone actually answer the question using real cog-sci knowledge.)

Thoughts on how this paper on intentional systems and this one on agents as intentional systems apply to distinctions between one's desires and what is actually good for oneself?

They don't.

The issue is defining "actually good for oneself" and revealed preferences don't help you here.


You can assume, fairly simply, that the things I retrospectively consciously endorse were actually good for me. The question is how to predict that sort of thing rather than rationalizing or checking it only retrospectively.

Ah, but (to be facetious and semi-trolling for a moment), the narrative fallacy means you can't trust those retrospective endorsements either. Isn't every thought we ever take just self-signalling? Are we not mere microbes in the bowels of Moloch, utterly incapable of real thought or action? Blobs of sentience randomly thrust above the mire of dead matter like a slime mould in its aggregation phase, imagining for a moment that it is a real thing, before collapsing once more into the unthinking ooze!


Ah, there's that good old-fashioned Overcoming-Biasian "rationality", insulting the human mind while making no checkable predictions whatsoever!

You wrote this facetiously, but I regularly find myself updating towards it being quite true.

You wrote this facetiously, but I regularly find myself updating towards it being quite true.

The basilisk lives, and goes forth to destroy the world! My work here is done!

More seriously, I find it easy to build that point of view from the materials of LessWrong, Overcoming Bias, and blogs on rationality, neuroscience, neoreaction, and PUA. If I were inclined to the task I could do it at book length, but it would be the intellectual equivalent of setting a car bomb. So I won't. But it is possible. It is also possible to build completely different stories from the same collection of concepts, as easy as it is to build them from words.

The question that interests me is why people (including myself) are convinced by this story or that. Are they undertaking rational updating in the face of evidence? I provided none, only cherry-picked references to other ideas woven together with hyperbolic metaphors. Do they go along with stories that tell them what they would like to believe already? And yet "microbes in the bowels of Moloch, utterly incapable of real thought or action" is not something anyone would want to be. Perhaps this story appeals because its message, "nothing is true, all is a lie", like its new-age opposite, "reality is whatever you want it to be", removes the burden of living in a world where achieving anything worthwhile is both possible and a struggle.


the things I retrospectively consciously endorse were actually good for me.

After how long?

Let us assume that I make a large loan out to someone - call him Jim. Jim promises to pay me back in exactly a year, and I have no reason to doubt him. Two months after taking my money, Jim vanishes, and cannot be found again. The one-year mark passes, and I see no sign of my loan being returned.

At this point, I am likely to regret extending the original loan; I do not retrospectively endorse the action.

One month later, Jim reappears; in apology for repaying my loan late, he repays twice the originally agreed amount.

At this point, I do retrospectively endorse the action of extending the loan.

So, whether or not I retrospectively endorse an action can depend on how long it is since the original action occurred, and can change depending on the observed consequences of the action. How do you tell when to stop, and consciously endorse the action?

That implies that "endorse" means "I conclude that this action left me better off than without it". I don't think this is what most people mean by endorsement. In particular, it fails to consider that some actions can leave you better off or worse off by luck.

If you drive drunk, and you get home safely, does that imply you would endorse having driven drunk that particular time?


If you drive drunk, and you get home safely, does that imply you would endorse having driven drunk that particular time?

No, it does not; undertaking a high-risk no-reward action is not endorsable simply because the risk is avoided once. You make a good point.

Nonetheless, I have noted that whether I retrospectively endorse an action or not can change as more information is discovered. Hence, the time horizon chosen is important.

I tend to avoid retrospectively endorsing actions based on their outcomes, as that opens up the danger of falling to outcome bias. I instead prefer to orient toward evaluating the process of how I made the decision and took the action, and then trying to improve the process. After all, I can't control the outcome, I can only control the process and my actions, and I believe it is important to only evaluate and endorse those areas that I can control.


You do make a good point; the advantage of retrospectively endorsing based on outcomes is that it highlights very clearly where your decision-making processes are faulty and provides an incentive to fix said faults before a negative outcome happens again.

But if you're happy with your decision-engine validating processes without that, then it's not necessary.

You can assume, fairly simply, that the things I retrospectively consciously endorse were actually good for me.

I think you're confusing regret or lack of it with "actually good for me". Certainly, the future-you can evaluate the consequences of some action better than the past-you, but he's still only future-you, not an arbiter of what is "actually good" and what is not.

I think there is another issue at play here, namely whether it is worthwhile to evaluate the consequences of decisions or actions, or the process of making the decision and taking the action. I believe that improving the process is what is important, not the outcome, as focusing on the outcome often leads to outcome bias. We can only control the process, after all, not the outcome, and it's important to focus on what is within our locus of control.


There's no confusion here if we use a naturalistic definition of "actually good". If we use a nonnaturalistic definition, then of course the question becomes bloody nonsense. I would hope you'd have the charity not to automatically interpret my question nonsensically!

There's no confusion here if we use a naturalistic definition of "actually good"

I have no idea what a naturalistic definition of "actually good" would be.


All of that is old philosophy work, not up-to-date cog-sci. It doesn't tell us much at all, since the very definition of irrationality is that your actions can be optimizing for something that's neither what you consciously intended nor what's good for you. The only way to get help from there on this issue would be to believe that humans are perfectly rational, look at what they do, and infer the goals backwards from there!

Ah, if you're looking for newer stuff, Stanovich's work has been really useful for getting some fine-grained distinctions between rationality and intelligence, and together with Evans, Stanovich did some great work on advancing the dual process theory. However, there is much more work to be done, of course. Perhaps others can help out with other useful citations?

Reminder: the downvote button is not a disagreement button. Knock it off, whoever you are.

Thanks, appreciate it!


Unfortunately, it still doesn't answer the question I actually asked. I know damn well I am not a perfectly rational robot. What I'm asking is: what's the cog-sci behind how I can check that my current conscious desires or goals accord with what is actually, objectively, good for me? "You are not a perfect goal-seeking robot" is simply repeating the question in declarative form.


how I can check that my current conscious desires or goals accord with what is actually, objectively, good for me?

Is this a question that cog-sci can answer?

Let us assume that I decide I really like a new type of food. It may be apples, it may be cream doughnuts. If I ask "is eating this good for me?", then that's more a question for a dietician to answer, instead of a cognitive psychologist, surely?


Assume that you must construct the "actually good" from my actual mind and its actual terminal preferences.

How about introducing the possibility of shifting terminal preferences?


Am I to be restricted to your current knowledge, and to the deductions you have made from information available to you, or can I introduce principles of, for example, physics or information theory or even dietary science not currently present in your mind?

I do believe that this has answered the other question I asked of you, with regards to after how long to consider the future-you who would be endorsing or not endorsing a given course of action; I understand from this comment that the future-you to choose is the limit future-you at time t as t approaches infinity. This implies that one possible answer to your question would be to imagine yourself at age ninety, and consider what it is that you would most or least appreciate having done at that age. (When I try this, I find that exercise and a healthy diet become very important; I do not wish to be old and frail at ninety. Old may not be avoidable, but frail certainly is...).


Am I to be restricted to your current knowledge, and to the deductions you have made from information available to you, or can I introduce principles of, for example, physics or information theory or even dietary science not currently present in your mind?

You are utterly unlimited in introducing additional knowledge. It just has to be true, is all. Introducing the dietary science on whether I should eat tuna, salmon, hummus, meat, egg, or just plain salad and bread for lunch is entirely allowed, despite my currently going by a heuristic of "tuna sandwiches with veggies on them are really tasty and reasonably healthful."

I understand from this comment that the future-you to choose is the limit future-you at time t as t approaches infinity. This implies that one possible answer to your question would be to imagine yourself at age ninety, and consider what it is that you would most or least appreciate having done at that age. (When I try this, I find that exercise and a healthy diet become very important; I do not wish to be old and frail at ninety. Old may not be avoidable, but frail certainly is...).

This is roughly my line of reasoning as well. What I find interesting is that:

A) People refuse to employ this simple and elegant line of reasoning when figuring out what to do, as if a decision-making criterion must be nonnatural.

B) Actually making the prediction is very hard, and what we practically end up doing is using heuristics that roughly guarantee, "I will not regret this decision too much, unless I gain sufficient additional knowledge to override almost everything I currently know."

Hm, I wonder about orienting toward the 90-year-old self. When I model myself, I at 90 would have liked to know that I lived a life that I consider fulfilling, and that may involve exercise and healthy diet, but also good social connections, and knowledge that I made a positive impact on the world, for example through Intentional Insights. Ideally, I would continue to live beyond 90, though, and that may involve cryonics or maybe even a friendly AI helping us all live forever - go MIRI!


Uhhh... sounds good to me. Well, sounds like the standard LW party-line to me, but it's also all actually good. Sometimes the simple answer is the right one, after all.


You are utterly unlimited in introducing additional knowledge. It just has to be true, is all.

Hmmm. This makes finding the correct answer very tricky, since in order to be completely correct I have to factor in the entirety of, well, everything that is true.

The best I'd be able to practically manage is heuristics.

People refuse to employ this simple and elegant line of reasoning when figuring out what to do, as if a decision-making criterion must be nonnatural.

Other people are more alien than... well, than most people realise. I often find data to support this hypothesis.

Actually making the prediction is very hard, and what we practically end up doing is using heuristics that roughly guarantee, "I will not regret this decision too much, unless I gain sufficient additional knowledge to override almost everything I currently know."

I don't think it's possible to do better than heuristics, the only question is how good your heuristics are. And your heuristics are dependent on your knowledge; learning more, either through formal education or practical experience, will help to refine those heuristics.

Hmmm... which is a pretty good reason for further education.

I will probably read this post in more detail when the font isn't hurting my sleep-deprived eyes. Please fix!

Thanks for noting that :-) I edited the post, hope that fixed the issue. And get some sleep soon.

How do I apply this insight? It seems potentially useful, but I can't think of any ways to have it help me. I now realize that my agency can vary from one domain to the next; what does this imply I should change about my behavior? Or is this solely supposed to help me evaluate others?

One highly useful way of applying the "domains of agency" framework to oneself is to evaluate one's agency in various life domains, and then focus on optimizing those areas that will be most helpful for your long-term goals, whatever they are. So for example, I have realized that my level of agency regarding self-regulation of eating is low, and have deliberately invested a lot of time and effort recently into changing my habits around food impulse control. Another area might be managing relationship ugh fields. So knowing specifically which areas you have low agency in can help you focus your energy on optimizing those specific areas.

As a result of these efforts, I have begun to deliberately invest resources into cultivating some relationships while withdrawing from others. My System 1 self still has a pretty strong ugh field about doing the latter, and my System 2 has to have a very serious talk with my System 1 every time I make a move to distance myself from extant relationships that no longer serve me well.

There are good reasons for that ugh field. Telling people to cut off relationships for which they see no value is a hallmark of cultism. It's not advice to be given or followed without care.

Why would I spend energy maintaining a relationship that has no apparent [positive] value?

Yup, that's the point I was going for above: having a rational take on relationships, as one among many life domains. After all, why invest resources of time, energy, finances, emotions, and so on into a relationship that overall does not serve one well?

This is an area where I think we disagree. I have a strong probabilistic belief that putting resources into relationships that do not serve me well is not rational, as doing so does not serve my goals for relationships. Let me be clear that this intentional managing of relationships applies to relationships with individuals as such, not to relationships with a group. Cultism to me, by contrast, implies a strong and charismatic group leader telling those who joined her/his group to cut off relationships with others who did not join that group. So maybe there is a difference in how we see the world here.

Also, my unstudied impression is that cutting out relationships which are important to the subject is the hallmark of cultism.

Yup, that's my perception of cultism as well. The goal I'd suggest to pursue is to evaluate intentionally which relationships are beneficial/important, and then shift energy and time into focusing on the ones from which one gets the most benefits.

Telling people to cut off relationships for which they see no value is a hallmark of cultism.

Seems like the opposite to me. But perhaps there is an ambiguity in the deixis:

X advises Y to cut off relationships that X derives no value from: bad.

X advises Y to cut off relationships that Y derives no value from: no problem.

And anyway, the quote is more:

X (the OP) cuts off relationships that X derives no value from, with the implication that this is a generally sensible thing to do.

It seems to me that it is.

A cult usually gives its members certain values. Once the members hold those values, they get into conflict with the individuals in their existing relationships who don't share them. Then the cult member is encouraged to cut off those relationships because they hold the cult member back.

The social default is that you don't cut off relations with members of your family even if you don't draw value from those relationships. Groups that do encourage members to cut off their family connections are then seen by other family members as cults.

Members of a cult see other members of the cult as high-value relationships and relationships with outsiders as low-value. That leads to groupthink and to not being grounded in society as a whole.

The rationalist who cuts off relationships with everybody he doesn't consider a rationalist falls under this case if you take the outside view.

with the implication that this is a generally sensible thing to do.

It seems to me that it is.

That ignores the point I made. In the outside view it's cultish behavior to cut certain relationships just because they provide you no value. With the inside view you can always find reasons.

I don't want to say that you should never cut relationships, but having a reluctance to do so, or to recommend that others do so, is good.

I'll just point out that I actively cut off relationships with people of no value before I read this. Therefore, your claim that non-cultists don't cut off relations with zero-value people is incorrect in at least one case and possibly more; as it is the core of your argument, your argument fails in at least one case and possibly more.

Therefore, your claim that non-cultists don't cut off relations with zero-value people is incorrect in at least one case and possibly more; as it is the core of your argument, your argument fails in at least one case and possibly more.

Causality 101. A -> B is not the same as B -> A.

I think you're reading more into the OP than is there. Family relationships were not the particular subject; no relationships of any specific sort were the subject. Relationships with "everybody who he doesn't consider to be a rationalist" were not the subject. The last may have been suggested by the context of the Columbus Rationality Meetup, but is there anything more here than "other people persuading each other of things I disagree with"? That does not make a cult.

The social default is that you don't cut off relations with members of your family even if you don't draw value from those relationships.

In some of the more unpleasant parts of the world, perhaps. A better default is "you don't cut off relations with members of your family unless for very strong reasons." Some people do actually have such reasons.

With the inside view you can always find reasons.

I spy invalidation! A telling sign of a cult, undermining the members' ability to trust themselves!

Indeed, family relationships as such were not my subject. My point was relationships in general, and the benefits of being intentional about our relationships of all types, family, friends, and romantic alike.

On a separate note, I very much agree that in some cases, with "very strong" reasons, it is appropriate to cut off relationships with family members. I myself had to cut off a relationship with a very close family member who reacted very suboptimally to my wife's mental health crisis this summer, and put a lot of stress and pressure on her and myself during a time of great stress for the two of us. The "domains of agency" model of thinking helped me make that process of withdrawing from the relationship less painful and more intentional.

I suspect that, while it is a legitimate distinction, dividing these skill-rankings into life domains:

A) Confuses what I feel to be your main (or at least, old) idea of agency, which focuses on the habit of intentionally improving situations, with the domain-specific knowledge required to be successful in improving a situation.

Mostly, I don't like the idea of redefining the word agency to be the product of domain-skills, generic rationality skills, and the habit of using rationality in that domain... because that's the same thing as succeeding in that domain (what we call winning) - well, minus situation effects, anyways. It seems far better to me to use "agency" to refer only to the habitual application of rationality.

You still find that agency is domain specific, but now it is separate from domain skills; give someone who is an agent in a domain some knowledge about the operative principles of that domain, and they start improving their situation; give a non-agent better information and you have the average Lifehacker reader: they read all this advice and don't implement any of it.

B) Isn't near fine-grained enough.

Besides the usual Psych 100 stuff about people remembering things better in the same environment they learned them in (how many environments can you think of; now, how many life domains; what's the ratio between those numbers? In the hundreds?), here is an anecdote which really drove the point home for me:

I have a game I'm familiar with (Echoes), which requires concurrent joystick and mouse input, and I like to train myself to use various messed-up control schemes (for instance, axis inversion). For several days I have to make my movements using a very attention-hungry, slow, deliberate process; over time this process gets faster and less attention hungry, reducing the frequency and severity of slip-ups until I am once again good at the game. I feel the parallels to a rationality practice are obvious.

Relevantly, the preference for the new control scheme then persists for some time... but, for instance, the last one only activated when some deep pattern matching hardware noticed that I had my hand on the joystick AND was playing that game AND was dodging (menus were no problem)... if I withdrew any of those conditions, mouse control was again fluent; but put your hand back on the joystick, and three seconds later...

So, I suppose my point in this subsection is that you cannot safely assume that because you've observed yourself being "agenty" in (say) several relationship situations, you are acting with agency in any particular relationship, topic, time, place, or situation.

(Also, I expect, the above game-learning situation would provide a really good way to screen substances and other interventions for rationality effects, but I haven't done enough experimentation with that to draw any conclusions about the technique or any specific substances.)

Regarding point A:

Based on our experience at Columbus Rationality, I think that having people think specifically about various life domains helps analyze and improve those life areas. My take is that habitual application of rationality is only one aspect of agency as such. I believe that to know where to apply one's rationality skills to improve the situation, it is vital to have a framework of thinking about specific life domains.

Regarding point B:

I think your criticism is appropriate here, and that's why I presented the article as the start of a research project, not the definitive conclusion. The case study of yourself with the game you brought up is exactly the kind of response that will help build up further case studies and provide fruitful ground for further research that will enable a more fine-grained understanding of various life domains and agency in various ones.