I'm wondering if you are using these terms as synonyms for conformism/non-conformism, or if there is more to being agentic than refusing to conform and looking for your own way?
Also this SSC post seems relevant. Scott calls them "thinkers".
There is much more to being agentic than nonconformity. I apologize for the unusual rambliness of this post. I can highlight where I tried to express this:
Returning to the question of willingness to be weird: it is more a prerequisite for agency than the core definition. An agent who is trying to accomplish a goal as strategically as possible, running a new computation, and performing a search for the optimal plan for them - they simply don’t want to be restricted to any existing solutions. If an existing solution is the best, no problem, it’s just that you don’t want to throw out an optimal solution just because it’s unusual.
I would add that it seems like you've focused your entire thought process on the problem of how "rationality" works, and you've also discussed the problem of how to actually get rationality done.
I'm not sure whether I can think of any reasonably useful and desirable parts of rationality which your proposal doesn't actually consider.
+1 good summary. I mean, you can always set a five minute timer if you want to think of more reasonably useful and desirable parts of rationality.
For this to work, you need enough time (and usually a reasonable amount of experience) with other rationality techniques to learn what you have.
For that you need some background knowledge of how to actually implement your claims; that includes explicit communication, face-to-face communication, and explicit communication about the thing that "seems" to match. Even with that knowledge, you can easily have a problem with people being too shy to actually have the kinds of conversations they like (e.g. PUA jargon).
And you must be somewhat lacking in social skills.
If you happen to be a little shy, then you'll have a problem with people being overly shy.
I have the impression that people who pick up a lot of social skills from a group can often become very social and yet still be unable to overcome the obstacles they face. (I could be too shy, but I'd really like an answer to "how can you show that you won't be shy without being afraid?")
In short, people can easily be oblivious to social challenges for longer than it takes to overcome them. For example, making the first approach at a bar is a challenge to overcome. The other person will give a lot of lectures in their bar and some social skills, although the most useful ones are the ones that create the social challenge for the other person.
While I acknowledge this, which I see as good advice, I don't see why it should apply to everyone, or even to the most powerful people. If, for instance, some people have social deficits that are fairly rare, such that they're not able to overcome them, then that is a confounding factor.
I guess if you wanted to be successful as a social worker in a social setting, that could be more of a factor. In that case you probably used more social skills than you needed, and that seems to be your excuse.
I think these are just examples. (A) the whole post is about the first time in the past that we lived in a civilization, (B) it's probably easier "thinking" (with respect to people and societies) to "play with the environment, to become the kind of man who can survive the end", and (C) the whole post actually sounds like it's going to have an advantage when it starts from "The Art of Super Strategy" and the "What is the art of human rationality" part.
I think it's easier to understand the art of human rationality from its own standpoint. If people have the feeling that rationality is a field that we should all be practicing and that looks really impressive to us -- why, of course, would anyone want to practice it, or are they too dumb to see? (A) as a general statement: "what does rationality teach us to do?" (B) as a kind of self-indication that a little bit of creativity could be useful.
To illustrate a bit of my own thought process, I might encourage you to check out Eliezer's posts on the rationalist community and Alicorn's posts on agency and all that. What I have found so far is that more people are willing to participate than would have in my previous post on agency or in Alicorn's post (because of your cached-self defense). Note that I don't necessarily disagree with A or with Alicorn's thinking about agency, and I would like to avoid that.
Also note that Alicorn's post has the truth sub-danger tag, so if I saw it there, I would also share it with the other commenters. But you have to be aware of the tag's text, and even if you don't, I certainly hope I haven't made things worse. And I hope you don't mind my posting it.
Consider the possibility that you're (and many are) conflating multiple distinct things under the term "agency".
1) Moral weight. I'll admit that I used the term "NPC" in my youth, and I regret it now. In fact, everyone has a rich life and their own struggles.
2) Something like "self-actualization", perhaps "growth mindset" or other names for a feeling of empowerment and the belief that one has significant influence over one's future. This is the locus-of-control belief (for the future).
3) Actual exercised influence over one's future. This is the locus-of-control truth (in the past).
4) Useful non-conformity - others' perceptions of unpredictability in desirable dimensions. Simply being weird isn't enough - being successfully weird is necessary.
I'm not sure I agree that "planning" is the key element. I think belief (on the agent's part and in those evaluating agency of others) in locus of control is more important. Planning may make control more effective, but isn't truly necessary to have the control.
I'm not at all sure that these are the same thing. But I do wonder if they're related in the sense that they classify into a cluster in an annoying but strong evolutionary strategy: "ally worth acquiring". Someone powerful enough (or likely to become so) to have an influence on my goals, and at the same time unpredictable enough that I need to spend effort on cultivating the alliance rather than taking it for granted.
Conflating (or even having a strong correlation between) 1) and the others is tricky because considering any significant portion of humanity to be "non-agents" is horrific, but putting effort into coordinating with non-agents is stupid. I suspect the right middle ground is to realize that there's a wide band of potential agency, and humans occupy a narrow part of it. What seems like large variance to us is really pretty trivial.
"Makes sense, and humans don't have any other simple agents. We have them out in the wild, we don't have them out in the wild..."
This comes from a post that makes reference to a real life case that doesn't use the word "emotion."
I liked the post. Some random thoughts I had while reading some of your random thoughts:
I read this as an argument for why black should win the conflict between black and green in the Magic color relations. I am more familiar with framings where black is painted as the villain side, and this contrast seemed very fruitful to my mind, as this felt like the rare point of view that is pro-black.
-Black does have to worry about punishment of deviants, but black can also be quite okay with being in actual conflict. The "error corrections" of deviation punishments can sometimes be justified: at the local level you don't always have the means to appreciate your actions' consequences for the wider picture. Green likes empirically found global solutions, and it really, really dislikes when black individuals have a strong influence in their locality, preventing certain types of solutions. Ignoring the atmospheric effects of CO2 allows for easier design of powerful industrial processes, and picking up that fruit might be very relevant to personal interests, but it's not like the restriction (or I guess concern, at this stage) is there without cause.
-Black takes actions that have known downsides that it thinks it can get away with. The trouble is that sometimes there are downsides they could not have known from their start position, and they can get bitten by things they could have known but did not in fact know. Green doesn't have a model of why what it does works, so it handles unknown dangers just as well as familiar threats. Curiosity, likewise, kills cats (although curiosity isn't selected against, at least not strongly enough).
-In the Magic metaphor, the willingness to take losses is much more severe: it's about willingness to cut your hand off to get at your goal. Framing it as "having high constitution" easily paints a picture where losses can be recovered from. But if you die or lose your arm, you don't get resurrected or regrow the limb. Black is about achieving things that are otherwise impossible, but also about summoning stuff that would never happen to you otherwise.
-The flip side of not taking others' opinions too readily is imposing your will too strongly on others. If you take on a vocabulary that suggests and helps you make a plan of action but also demonises other people, it can be easier to become the villain (it's a pretty common trope, too, that villains act and heroes react). If it is better to rule in hell than serve in heaven, is it worth the trouble to turn heaven into hell based solely on the fact that your personal situation improves? The whole "alignment problem" is kind of the realisation that an independent mind will have an independent direction, which could theoretically be in conflict with other directions. The black stance is that "individual will" is a resource to be celebrated and not a problem to be solved away.
I have to be careful what I say about the model that I have in mind in that post. I just want to be clear that I don't think we need this model in order to make a certain kind of assumption.
Many ideas seem to hold that "anything in life" (say, human-caused global warming) is universal in this universe (for example, no heat gradient for humans), but many things in evolution can't be universal in this universe (for example, no carbon dating for a human), even if we knew it to be a universal law.
The model presented in your post can, in some cases, be more fundamental than the one we actually have. But the "common" model, the one I proposed in your post, just doesn't make any stronger claim.
I don't think it's good to model general conditions of development in which a universal law is in conflict with universal nature.
Epistemic status: Fairly high confidence. Probably not complete, but I do think pieces presented are all part of the true picture.
Agency and being an agent are common terms within the Effective Altruist and Rationalist communities. I have only ever heard them used positively, typically to praise someone’s virtue as an agent or to decry the lack of agents and agency.
As a concept, agency is related to planning. Since I’ve been writing about planning of late, I thought I’d attempt a breakdown of agency within my general planning paradigm. I apologize that this write-up is a little rushed.
Examples of Agency and Non-Agency
To keep the exposition concrete, let’s start with some instances of things I expect to be described as more or less agentic.
Things likely to be described as more agentic:
Things likely to be described as less agentic:
I have not worked hard to craft these lists so I doubt they are properly comprehensive or representative, but they should suffice to get us on the same page.
At times it has been popular, and admittedly controversial, to speak of how some people are PCs (player characters) and others are mere NPCs (non-player characters). PCs (agents) do interesting things and save the day. NPCs (non-agents) follow scripted, boring behaviors, like stocking and manning the village store for the duration of the game. PCs are the heroes; NPCs are not. (It is usually the case that anyone who is accomplished or impressive is granted the title of agent.)
The Ingredients of Agency
What causes the people in one list to be agentic and those in the other not? A ready answer is that the agentic people are willing to be weird. The examples divide nicely along conformity vs. nonconformity, doing what everyone else does vs. forging your own path.
This is emphatically true - agency requires willingness to be different - but I argue that it is incidental. If you think agency is about being weird, you have missed the point. Though it is not overly apparent from the examples, the core of agency is about accomplishing goals strategically. Foremost, an agent has a goal and is trying to select their actions so as to accomplish that goal.
But in a way, so does everyone. We need a little more detail than this standard definition that you’ve probably heard already. Even if we say that a computer NPC is mindlessly executing their programming, a human shopkeeper legitimately does have their own goals and values towards which their actions contribute. It should be uncontroversial to say that all humans are choosing their actions in a way that digital video game NPCs are not. So what makes the difference between a boring human shopkeeper and Barack Obama?
It is not that one chooses their actions and the other does not at all, but rather the process by which they do so.
First, we must note that planning is really, super-duper, fricking hard. Planning well requires the ability to predict reality well and to do some seriously involved computation. Given this, one of the easiest ways to plan is to model your plan off someone else’s. It’s even better if you can model your plan off those executed by dozens, hundreds, or thousands of others. When you choose actions already taken by others, you have access to some really good data about what will happen when you take those actions. If I want to go to grad school, there’s a large supply of people I could talk to for advice. By imitating the plans of others, I ensure that I probably won’t get any worse results than they did, plus it’s easier to know which plans are low-variance when lots of people have tried them.
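The point about knowing which plans are low-variance can be made concrete with a toy model: the more people who have tried a plan, the more precisely you can estimate its typical outcome, since the standard error of the sample mean shrinks as 1/sqrt(n). A minimal sketch, with entirely made-up numbers (nothing here comes from the post itself):

```python
import random
import statistics

random.seed(0)

def outcome():
    # Hypothetical payoff of one person following a well-trodden plan:
    # mean 100, standard deviation 30 (arbitrary units).
    return random.gauss(100, 30)

# With more predecessors, the estimate of the plan's value tightens.
for n in (5, 50, 5000):
    samples = [outcome() for _ in range(n)]
    mean = statistics.mean(samples)
    stderr = statistics.stdev(samples) / n ** 0.5
    print(f"n={n:5d}  estimated mean={mean:6.1f}  standard error={stderr:5.2f}")
```

With thousands of predecessors your estimate of the plan's value is an order of magnitude tighter than with a handful, which is one reason imitating popular plans provides such reliable data.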
The difference is that agents are usually executing new computation, taking risks on plans with much higher uncertainty and risk associated with them. The non-agent gets to rely on the fact that many people’s models judged particular actions to be a good idea, whereas the agent must rely much more on their own models.
Consider the archetypical founders dropping out of college to work on their idea (back before this was a cool, admirable archetype). Most people were following a pathway with a predictably good outcome. Wozniak, Jobs, and Gates probably would have graduated and gotten fine jobs just like people in their reference class. But they instead calculated that a better option for them was to drop out with the attendant risk. This was a course of action that stemmed from them thinking for themselves what would most lead towards their goals and values. Bringing their own models and computation to the situation.
This bumps into another feature of agency: agents who are running their own action-selection computation for themselves rather than imitating others (including their past selves) are able to be a lot more responsive to their individual situation. Plans made by the collective have limited ability to include parameters which customize the plan to the individual.
Returning to the question of willingness to be weird: it is more a prerequisite for agency than the core definition. An agent who is trying to accomplish a goal as strategically as possible and who is running new computation and performing a search for the optimal plan for them - they simply don’t want to be restricted to any existing solutions. If an existing solution is the best, no problem, it’s just that you don’t want to throw out an optimal solution just because it’s unusual.
What other people do is useful data, but to an agent it won’t inherently be a limitation. (Admittedly, you do have to account for how other people will react to your deviance in your plans. More on this soon.)
Mini-summary: an agent tries to accomplish a goal by running relatively more of their own new computation/planning relative to pure imitation of cached plans of others or their past selves; they will not discard plans simply because they are unusual.
Now why be agentic? When you imitate the plans of others, you protect against downside risk and likely won’t get worse than most. On the other hand, you probably won’t get better results either. You cap your expected outcomes within a comfortable range.
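The capping effect can be illustrated with a toy simulation (the distributions and numbers are purely illustrative assumptions, not anything from the post): an imitated plan draws from a tight outcome distribution, while a novel plan draws from a wide one with a somewhat higher mean. The imitator almost never does badly, but almost never does exceptionally either.

```python
import random

random.seed(1)
N = 100_000  # simulated people per strategy

# Hypothetical outcome distributions (arbitrary units).
imitated = [random.gauss(100, 10) for _ in range(N)]  # beaten path: low variance
novel = [random.gauss(110, 60) for _ in range(N)]     # own plan: higher mean, high variance

def frac(xs, pred):
    """Fraction of outcomes satisfying a predicate."""
    return sum(pred(x) for x in xs) / len(xs)

# The novel plan has far more mass in both tails.
print("P(outcome < 50):  imitated=%.3f  novel=%.3f" %
      (frac(imitated, lambda x: x < 50), frac(novel, lambda x: x < 50)))
print("P(outcome > 200): imitated=%.3f  novel=%.3f" %
      (frac(imitated, lambda x: x > 200), frac(novel, lambda x: x > 200)))
```

Under these assumed numbers, the imitator essentially never lands below 50 or above 200, while the agent's plan produces both failures and big wins with noticeable frequency.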
I suspect that among the traits which cause people to exhibit the behaviors we consider agentic are:
There has to be something which makes a person want to invest the effort to come up with their own plans rather than marching along the beaten paths with everyone else.
Or maybe not, maybe some people have powerful and active minds so that it’s relatively cheap to them to be thinking fresh for themselves. Maybe in their case, the impetus is boredom.
An agent must believe that more is possible, and more crucially they must believe that it is possible for them to cause that more. This corresponds to the locus-of-control and self-efficacy variables in the core self-evaluations framework.
Further, any agent whose significant work you’re able to see has likely possessed a good measure of conscientiousness. I’m not sure if lazy geniuses might count as an exception. Still, I expect a strong correlation here. Most people who are conscientious are not agents, but those agents whom you observe are probably conscientious.
The last few traits could be considered “positive traits”, active traits that agents must possess. There are also “negative traits”, traits that most people have and that agents must have less of.
Agents strive for more, but the price they pay is a willingness to risk getting even less. If you drop out of college, you might make millions of dollars or you might end up broke and without a degree. When you make your own plans and possibly go off the beaten path, there is a real likelihood of failure. What’s worse, if you fail then you can be blamed for your failure. Pity may be withheld because you could have played it safe and gone along with everyone else, and instead you decided to be weird.
Across all the different situations, agents might be risking money, home, respect, limb, life, love, career, freedom and all else they value. Not everyone has the constitution for that.
Now, just a bit more needs to be said about agents and the social situation. Above, it was implied that the plans of others are essentially orthogonal to those of an agent: agents are not limited by them. That is true as far as the planning process goes, but enacting one's plans takes a little more.
An agent doesn’t just risk that their unusual plans might fail in ways more standard plans don’t; they also have to risk that they will 1) lose out on approval because they are not doing the standard things, or 2) actively be punished for being a deviant with their plans.
If there is status attached to going along certain popular pathways, e.g. working in the right prestigious organizations, then anyone who decides to follow a different plan that only makes sense to them must necessarily forego status they might have otherwise attained. (Perhaps they are gambling that they’ll make more eventually on their own path, but at least at first they are foregoing it.) This creates a strong filter: agents are those people who were either indifferent to status or willing to sacrifice it for greater gain.
Ideally, it would only be potentially foregone status which affected agents; instead, there is the further element that deviance is often actively punished. It’s the stereotype that the establishment strikes out against the anti-establishment. Every group will have its known truths and its taboos. Arrogance and hubris are sins. We are hypocrites who simultaneously praise those who have gone above and beyond while sneering at those who attempt to do the same. Agents must have thick skin.
Indeed, agents must have thick skin and be willing to gamble. In contrast, imitation (which approximates non-agency) serves the multifold function of a) saving computation, b) reducing risk, and c) guarding against social opprobrium and even optimizing for social reward.
Everyday Agents
I fear the above discussion of agency has tended toward the too grandiose, too much towards revolutionaries and founders of billion-dollar companies. Really though, we need agency on much more mundane scales too.
Consider that an agentic employee is a supremely useful employee since:
An agentic employee is the kind of employee who doesn’t succumb to defensive decision-making.
Why Agency is Uncommon
The discussion so far can be summarized neatly by saying what it is that makes agency uncommon:
Agent/Non-Agent vs. More and Less Agentic
This post is primarily written in terms of agents and non-agents. While convenient, this language is dangerous. I fear that when being an agent is cool, everyone will think themselves one and go to sleep each night congratulating themselves for being an agent, unlike all those bad, dumb non-agents.
Better to treat agency as a spectrum upon which you can score higher or lower on any given day.
Addendum: Mysterious Old Wizards
A friend of mine has the hypothesis that a primary way to cause people to be more agentic is to have someone be their mysterious old wizard a la Gandalf, Dumbledore, and Quirrell. A mysterious old wizard shows up, believes in someone, probably says some mysterious stuff, and this helps induce agency.
I can see this working. This might have happened to me a bit, too. If someone shows up and is sufficiently high-status in your mind, and they tell you that you are capable of great things, they can cause all the following:
I can see it working.