In the past year or two, I've spent a lot of time explicitly trying to taboo "agenty" modelling of people from my thoughts. I didn't have a word for it before, and I'm still not sure agenty is the right word, but it's the right idea. One interesting consequence is that I very rarely get angry any more. It just doesn't make sense to be angry when you think of everyone (including yourself) mechanically. Frustration still happens, but it lacks the sense of blame that comes with anger, and it's much easier to control. In fact, I often find others' anger confusing now.
At this point, my efforts to taboo agenty thinking have been successful enough that I misinterpreted the first two paragraphs of this post. I thought it was about the distinction between people I model as full game-theoretic agents (I account for them accounting for my actions) versus people who will execute a fixed script without any reflective reasoning. To me, that's the difference between PCs and NPCs.
More recently, following this same trajectory, I've experimented with tabooing moral value assignments from my thoughts. Whenever I catch myself thinking of what one "should" do, I taboo "should" and replace it with something else. Originally, this amorality-via-taboo was just an experiment, but I was so pleased with it that I kept it around. It really helps you notice what you actually want, and things like "ugh" reactions become more obvious. I highly recommend it, at least as an experiment for a week or two.
At this point, my efforts to taboo agenty thinking have been successful enough that I misinterpreted the first two paragraphs of this post. I thought it was about the distinction between people I model as full game-theoretic agents (I account for them accounting for my actions) versus people who will execute a fixed script without any reflective reasoning. To me, that's the difference between PCs and NPCs.
This is exactly the kind of other-people-thinking-differently-than-I-do interestingness that caused me to write this post!
The thing that was most interesting to me, on reflection, is that I do get angry less since I've started modelling most people "mechanically". It's just that my brain doesn't automatically extend that to people whom I respect a lot for whatever reason. For them, I will get angry. Which isn't helpful, but it is informative. I think it might just show that I'm more surprised when people whom I think of as PCs let me down, and that when I get angry, it's because I was relying on them and hadn't made fallback plans; the anger is more just my anxiety about my plans no longer working.
I do get angry less since I've started modelling most people "mechanically". It's just that my brain doesn't automatically extend that to people whom I respect a lot for whatever reason.
It seems that once you assign specific people to the NPC category you think of them as belonging to a lesser, inferior kind. That's why you get less angry at them and that's why those you respect don't get assigned there.
One of my habits while driving is to attempt to model the minds of many of the drivers around me (in situations of light traffic). One result is that when someone does something unexpected, my first reaction is typically "what does he know that I don't?" rather than "what is that idiot doing?". From talking to other drivers, this part of my driving seems abnormal.
In this sense, I model my parents strongly as agents–I have close to 100% confidence that they will do whatever it takes to solve a problem for me.
One of the frequent complaints about the 'agent' concept space, and the "heroic responsibility" concept in particular, is that it rarely seems to take into account people's spheres of responsibility. Are your parents the sort of people who would be able to solve anyone's problem, or are they especially responsible for you? Are other people that seem to be NPCs to you just people that don't care enough about you to spend limited cognitive (and other) resources on you and your problems?
...With people who I model as agents, I'm more likely to invoke phrases like "it was your fault that X happened" or "you said you would do Y, why didn't you?"
One of the frequent complaints about the 'agent' concept space, and the "heroic responsibility" concept in particular, is that it rarely seems to take into account people's spheres of responsibility. Are your parents the sort of people who would be able to solve anyone's problem, or are they especially responsible for you? Are other people that seem to be NPCs to you just people that don't care enough about you to spend limited cognitive (and other) resources on you and your problems?
My younger self didn't get this. I remember being surprised and upset that my parents, who would always help me with anything I needed, wouldn't automatically also help me help other people when I asked them. For example, my best friend needed somewhere to stay with her one-year-old, and I was living with my then-boyfriend, who didn't want to share an apartment with a toddler. I was baffled and hurt that my parents didn't want her staying in my old bedroom, even if she paid rent! I'd taken responsibility for helping her, and they had responsibility for helping me, so why not?
Now I know that that's not how most people behave, and that if it was, it might actually be quite dysfunctional.
Do you get more of what you want by blaming people or assigning fault?
I don't think so.
One of the frequent complaints about the 'agent' concept space, and the "heroic responsibility" concept in particular, is that it rarely seems to take into account people's spheres of responsibility. Are your parents the sort of people who would be able to solve anyone's problem, or are they especially responsible for you? Are other people that seem to be NPCs to you just people that don't care enough about you to spend limited cognitive (and other) resources on you and your problems?
I agree with this. I keep being a little puzzled over the frequent use of the "agenty" term at LW, since I haven't really seen any arguments establishing why this would be a useful distinction to make in the first place. At least some of the explanations of the concept seem mostly like cases of correspondence bias (I was going to link an example here, but can't seem to find it anymore).
I keep being a little puzzled over the frequent use of the "agenty" term at LW, since I haven't really seen any arguments establishing why this would be a useful distinction to make in the first place.
Here is my brief impression of what the term "agenty" on LW means:
An "agent" is a person with surplus executive function.
"Executive function" is some combination of planning ability, willpower, and energy (only somewhat related to the concept in psychology). "Surplus" generally means "available to the labeler on the margin." Supposing that people have some relatively fixed replenishing supply of executive function, and relatively fixed consistent drains on executive function, then someone who has surplus executive function today will probably have surplus executive function tomorrow, or next week, or so on. They are likely to be continually starting and finishing side projects.*
The practical usefulness of this term seems obvious: this is someone you can delegate to with mission-type tactics (possibly better known as Auftragstaktik). This ability makes them good people to be friends with. Having this ability yourself both m...
Okay, now that does sound like a useful term.
Does anyone happen to know of reliable ways for increasing one's supply of executive function, by the way? I seem to run out of it very quickly in general.
Does anyone happen to know of reliable ways for increasing one's supply of executive function, by the way? I seem to run out of it very quickly in general.
There are a handful of specific small fixes that seem to be helpful. For example, having a capture system (which many people are introduced to by Getting Things Done) helps decrease cognitive load, which helps with willpower and energy. Anti-akrasia methods tend to fall into clusters of increasing executive function or decreasing value uncertainty / confusion. A number of people have investigated various drugs (mostly stimulants) that boost some component.
I get the impression that, in general, there are not many low hanging fruit for people to pick, but it is worth putting effort into deliberate upgrades.
After joining the military, where executive function on demand is sort of the meta-goal of most training exercises, I found that having a set wardrobe actually saves a great deal of mental effort. You just don't realize how much time you spend worrying about clothes until you have a book which literally has all the answers and can't be deviated from. I know that this was also a thing that Steve Jobs did: one 'uniform' for life. President Obama apparently does it as well. http://www.forbes.com/sites/jacquelynsmith/2012/10/05/steve-jobs-always-dressed-exactly-the-same-heres-who-else-does/
There are a number of other things I've learned for this which are maybe worth writing up as a separate post. Not sure if that's within the purview of LW though.
I propose a theme song for this comment section.
One of my fond memories of high school is being a little snot and posing a math problem to a "dumb kid". I proceeded to think up the wrong answer, and he got the right one (order of operations :D ). This memory is a big roadblock to me modeling other people as different "types" - differences are mostly of degree, not kind. A smart person can do math? Well, a dumb person has math that they can do well. A smart person plans their life? Dumb people make plans too. A dumb person uses bad reasoning? Smart people use bad reasoning.
I consistently, predictably get mad at her for things like saying she'll do the dishes and then not doing them
This doesn't sound like a lack of agentiness. This sounds like a communication problem. Do you think that you're more likely to think of someone as "agenty" if their planning processes are (seemingly) transparent to you (e.g. "this person said they wanted a cookie, then they took actions to get a cookie") vs. non-transparent (e.g. "that person said they wanted a salad, then they took actions to get a cookie")?
#1 grates for me. If a friend goes to me in tears more than a couple of times demanding that I fix their bicycle/grades/relationship/emotional problems, I will no longer consider them a friend. If you ask politely I'll try to get you on the right track (here's the tool you need and here's how to use it/this is how to sign up for tutoring/whatever), but doing much more than that is treating the asker as less than an agent themself. Going to your friend in tears before even trying to come up with a solution yourself is not a good behavior to encourage (I've been on both sides of this, and it's not good for anyone).
Don't confuse reliability and responsibility with being a sucker.
PCs are also systems; they're just systems with a stronger heroic responsibility drive. On the other hand, when you successfully do things and I couldn't predict exactly how you would do them, I have no choice but to model you as an 'intelligence'. But that's, well... really rare.
Intelligent people only rarely tackle problems that stretch the limit of their cognitive abilities to (predict how to) solve. Thus, most of my exposure to this comes by way of, e.g., watching mathematicians at decision theory workshops prove things in domains where I am unfamiliar - then they can exceed my prediction abilities even when they are not tackling a problem which appears to them spectacularly difficult.
The OP here raises a very interesting question, but I can't help but be distracted by the phrasing. Humans are both decision-making agents and complex biochemical systems, so my poor pedantic brain is spinning its wheels trying to turn that into a dichotomy. If it were me, I would have said Subject v Object, especially since this ties into objectification, but that's a nitpick too minor for me not to upvote it. Anyway...
Personally I lean towards a "complex systems" model of other humans. People can surprise you, pleasantly or unpleasantly, in how ...
I model basically everyone I interact with as an agent. This is useful when trying to get help from people who don't want to help you, such as customer service or bureaucrats. By giving the agent agency, it's easy to identify the problem: the agent in question wants to get rid of you with the least amount of effort so they can go back to chatting with their coworkers/browsing the internet/listening to the radio. The solution is generally to make it seem like less effort to get rid of you by helping you with your problem (which is their job after all) than something else. This can be done by simply insisting on being helped, making a ruckus, or asking for a manager, depending on the situation.
This post reminded me of a conversation I was having the other day, where I noted that I commit the planning fallacy far less than average because I rarely even model myself as an agent.
Good article!
One of my hobbyhorses is that you can gain a good deal of insight into someone's political worldview by observing whom they blame versus absolve for bad acts, since blame implies agency and absolution tends to minimize it. Often you find this pattern to be the reverse of stated sympathies. Examples left as an exercise to the reader.
Almost everyone at the table immediately recognized what we were talking about and agreed that modelling some people as agents and some people as complex systems was a thing they did.
Wow. I did not realize that so many other people felt aware of this dichotomy.
So, usually when I'm in a good mood, there isn't any dichotomy. I model everyone in exactly the same way that I model myself - as individuals with certain strengths and weaknesses. You might say that I even model myself as a complex system, to a degree. The model is complete enough that the compl...
I aspire to model myself as the only "agent" in the system, kind of like Harry does in HPMOR (with the possible exception of Professor Quirrell). I'm the one whose behavior I can change most directly, so it is unhelpful (at least for me) to model circumstances (which can cause a dangerous victim mentality) or other people as agents. Even if I know I can make an argument to try to change another person's mind, and estimate I have a 50/50 chance of success, it is still me who is making the choice to use Argument A rather than Argument B.
In terms ...
Hmm. I seem to very much, very purely, model myself as an NPC by these definitions. By extension, since I can't use empathic modelling to differentiate like you describe doing, I model exactly everyone as NPCs. It's also the case that I've never had to model a PC in detail; I know about some people who are, probably including you, but I've never really had the opportunity to interact with such a rare creature for long enough to develop a new way of modelling, and I seem to be just winging it by assigning a probability-bending magic black box power called "rationality".
I suspect all people, including me, are NPC meat-computers running firmware/software that provides the persistent, conscious illusion of PC-ness (or agenty-ness). Some people are more advanced computers and, therefore, seem more agenty... but all are computers nonetheless.
Modeling people this way (as very complex NPCs), as some have pointed out in the comments, seems to be a rather effective means of limiting the experience of anger and frustration... or at least making anger and frustration seem irrational, thereby causing it (at least in my experience) to...
Any mind that I can model sufficiently well to be accurate ceases to be an agent at that point.
If I can predict what you are going to do with 100% certainty, then it doesn't matter what internal processes lead you to take that action. I don't need to see into the black box to predict the action of the machine.
People I know well maintain their agenthood by virtue of the fact that they are sufficiently complex to think in ways I do not.
For these reasons, I rarely attempt to model the mental processes of minds I consider to be stronger than mine (in the ra...
Schelling's Strategy of Conflict says that in some cases, advertising non-agency can be useful, something like "If you cross this threshold, that will trigger punitive retaliation, regardless of cost-benefit, I have no choice in the matter."
Hm, interesting. I have some terminological confusion to battle through here.
My mind associates "agent" with either Bond/MiB creatures or game theory and economics. The distinction you're drawing I would describe as active and passive. "Agenty"/PC people are the active ones, they make things happen, they shape the narrative, they are internally driven to change their environment. By contrast the "complex-system"/NPC people are the passive ones, they react to events, they go with the flow, the circumstances around them drive th...
I thought #3 was the definition of "agent", which I suppose is why it got that label. #1 sounds a little like birds confronted by cuckoo parasitism, which Eliezer might call "sphexish" rather than agenty.
I've used "agentness" in at least one LessWrong post to mean the amount of information you need to predict someone's behavior, given their environment, though I don't think I defined it that way. A person whose actions can always be predicted from existing social conventions, or from the content of the Bible, is not a moral agent. You might call them a moral person, but they've surrendered their agency.
Perhaps I first got this notion of agency from the Foundation Trilogy: Agenthood is the degree to which you mess up Hari Seldon's equations.
My prefere...
The degree to which I feel blame or judgement towards people for not doing things they said they would do is almost directly proportional to how much I model them as agents.
I've noticed that people are angrier at behaviors they can't explain. The anger subsides when they learn about the motives and circumstances that led to the behavior. If non-agents are supposed to be less predictable, I'd guess we're more inclined to judge/blame them.
Here's my answer to the title question, before reading the post*:
I understand the word "agent" to refer to a model I created specifically for modeling humans. The two agree to such a degree that any discrepancy is almost entirely due to the ambiguity of these words.
After reading the post: I don't notice myself making the distinction you describe. Under your distinction, the way I model people seems more like treating everyone (including myself) as a complex system than treating everyone (including myself) as an agent, but I'm not sure of this.
*Well, I peeked at the first few sentences.
Upon reflection, I think I consider people whose behavior I have trouble modeling/predicting (roughly those smarter than I am) as PCs and the rest (including myself, unfortunately) as NPCs. However, sometimes I get surprised by NPCs behaving in an agenty way, and sometimes I get surprised by PCs behaving predictably, including predictably wrong.
This seems to me to be a conversation about semantics. I.e.:
IF
You and I both view John to have the same:
1) Reliability and responsibility
2) Intellectual formidability
3) Conventional "agentiness"
BUT
You think that intellectual formidability is part of what makes someone "agenty" and I don't.
THEN
We agree about everything that's "real", and only are choosing to define the word "agent" differently.
I anticipate a reasonable chance that something in this conversation just went right over my head, and that it's about som...
There's no natural grouping to your examples. Some of them are just people who care about you. Others are people who do things you find impressive.
Frankly, this whole discussion comes across as arrogant and callous. I know we're ostensibly talking about "degree of models" or whatever, but there are clear implicit descriptive claims being made, based on value judgments.
I model people constantly, but agency and the "PC vs. NPC" distinction don't even come into it. There are classes of models, but they're more like classes of computational automata: less or more complex, roughly scaling with the scope of my interactions with a person. For instance, it's usually fine to model a grocery store cashier as a nondeterministic finite state machine; handing over groceries and paying are simple enough interactions that an NFSM suffices. Of course the cashier has just as much agency and free will as I do -- but there's a p...
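To make the automaton metaphor concrete, here is a minimal sketch of what a "cashier as nondeterministic finite state machine" model might look like; the states, events, and transitions are made up purely for illustration.

```python
# A toy nondeterministic finite state machine (NFSM) for the "grocery store
# cashier" example above. All states and events are invented for illustration.

CASHIER_NFSM = {
    # (state, event): set of possible next states (nondeterminism = sets)
    ("idle", "customer_arrives"): {"scanning"},
    ("scanning", "item_scanned"): {"scanning"},
    ("scanning", "items_done"): {"awaiting_payment", "small_talk"},
    ("small_talk", "topic_exhausted"): {"awaiting_payment"},
    ("awaiting_payment", "payment_received"): {"idle"},
}

def step(states, event):
    """Advance every currently-possible state by one event."""
    nxt = set()
    for s in states:
        nxt |= CASHIER_NFSM.get((s, event), {s})  # unknown event: state unchanged
    return nxt

# A routine checkout needs only this much model: after the items are rung up,
# the cashier might be waiting for payment or making small talk.
states = {"idle"}
for event in ["customer_arrives", "items_done"]:
    states = step(states, event)
print(states)  # {'awaiting_payment', 'small_talk'}
```

The point of the sketch is only that the model's size scales with the scope of the interaction: a handful of states suffices for handing over groceries, while someone you interact with deeply would need something far richer.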
I was thinking about this recently in the context of the game Diplomacy. One way to play is to model your opponents as rational self-interested actors making the optimal move for them at a particular time. This can separate off your attitudes to people in the game from your normal emotional reactions (e.g. move from "he stabbed me in the back" to "he acted in his self-interest, as I should have expected").
[An interesting exercise would be to write down each turn what you predict the other players will do and compare that to their actions.]
There are at best seven people in the world that are actually modelled as agents in my own head. My algorithm for predicting the behaviour of an individual generally follows:
1) Find out what someone of their social class normally does
2) Assume they will continue to do that plus or minus some hobbies and quirks
3) If they deviate really strangely, check how they have reacted to past crises and whether they have any interests which make them likely to deviate again.
If this fails, then begin modelling them to increase prediction accuracy.
This works reasonably ...
I'd personally name the ability to change opinions and behaviour as the most important difference between PC and NPC.
So, you (Swimmer963) think of agenty people as being those who:
1. are reliable and take heroic responsibility,
2. are intellectually formidable, and
3. are "agenty" in the conventional sense of deliberately acting to get what they want.
It is interesting that all three of these behaviors seem to be high status behaviors. So, my question is this: does high status make someone seem more agenty to you? Could sufficiently high status be a sufficient condition for someone being "agenty"?
After I read the question "do you model people as agents versus complex systems?", I started to wonder which of the two options is more "sophisticated". Is an agent more sophisticated than a complex system, or vice versa? I don't really have an opinion here.
Something I like to tell myself is that people are animals first and foremost. Whenever anyone does anything I find strange, unusual, or irrational, my instinct is to speculate about the cause of the behavior. If person A is rude towards person B, I don't think, "person A is bei...
The idea for this post came out of a conversation during one of the Less Wrong Ottawa events. A joke about being a solipsist turned into a genuine question–if you wanted to assume that people were figments of your imagination, how much of a problem would this be? (Being told "you would be problematic if I were a solipsist" is a surprising compliment.)
You can rephrase the question as "do you model people as agents versus complex systems?" or "do you model people as PCs versus NPCs?" (To me these seem like a reframing of the same question, with a different connotation/focus; to other people they might seem like different questions entirely). Almost everyone at the table immediately recognized what we were talking about and agreed that modelling some people as agents and some people as complex systems was a thing they did. However, pretty much everything else varied–how much they modelled people as agents overall, how much it varied in between different people they knew, and how much this impacted the moral value that they assigned to other people. I suspect that another variable is "how much you model yourself as an agent"; this probably varies between people and impacts how they model others.
What does it mean to model someone as an agent?
The conversation didn't go into this in huge amounts of detail, but I expect that due to the typical mind fallacy, it's a fascinating discussion to have–that the distinctions that seem clear and self-evident to me probably aren't what other people use at all. I'll explain mine here.
1. Reliability and responsibility. Agenty people are people I feel I can rely on, who I trust to take heroic responsibility. If I have an unsolved problem and no idea what to do, I can go to them in tears and say "fix this please!" And they will do it. They'll pull out a solution that surprises me and that works. If the first solution doesn't work, they will keep trying.
In this sense, I model my parents strongly as agents–I have close to 100% confidence that they will do whatever it takes to solve a problem for me. There are other people who I trust to execute a pre-defined solution for me, once I've thought of it, like "could you do me a huge favour and drive me to the bike shop tomorrow at noon?" but whom I wouldn't go to with "AAAAH my bike is broken, help!" There are other people who I wouldn't ask for help, period. Some of them are people I get along with well and like a lot, but they aren't reliable, and they're further down the mental gradient towards NPC.
The end result of this is that I'm more likely to model people as agents if I know them well and have some kind of relationship where I would expect them to want to help me. Of course, this is incomplete, because there are brilliant, original people who I respect hugely, but who I don't know well, and I wouldn't ask or expect them to solve a problem in my day-to-day life. So this isn't the only factor.
2. Intellectual formidability. To what extent someone comes up with ideas that surprise me and seem like things I would never have thought of on my own. This also includes people who have accomplished things that I can't imagine myself succeeding at, like startups. In this sense, there are a lot of bloggers, LW posters, and people on the CFAR mailing list who are major PCs in my mental classification system, but who I may not know personally at all.
3. Conventional "agentiness". The degree to which a person's behaviour can be described by "they wanted X, so they took action Y and got what they wanted", as opposed to "they did X kind of at random, and Y happened." When people seem highly agenty to me, I model their mental processes like this–my brother is one of them. I take the inside view, imagining that I wanted the thing they want and had their characteristics, i.e. relative intelligence, domain-specific expertise, social support, etc, and this gives better predictions than past behaviour. There are other people whose behaviour I predict based on how they've behaved in the past, using the outside view, while barely taking into account what they say they want in the future, and this is what gives useful predictions.
This category also includes the degree to which people have a growth mindset, which approximates how much they expect themselves to behave in an agenty way. My parents are a good example of people who are totally 100% reliable, but don't expect or want to change their attitudes or beliefs much in the next twenty years.
These three categories probably don't include all the subconscious criteria I use, but they're the main ones I can think of.
How does this affect relationships with people?
With people who I model as agents, I'm more likely to invoke phrases like "it was your fault that X happened" or "you said you would do Y, why didn't you?" The degree to which I feel blame or judgement towards people for not doing things they said they would do is almost directly proportional to how much I model them as agents. For people who I consider less agenty, whom I model more as complex systems, I'm more likely to skip the blaming step and jump right to "what are the things that made it hard for you to do Y? Can we fix them?"
On reflection, it seems like the latter is a healthier way to treat myself, and I know this (and consistently fail at doing this). However, I want to be treated like an agent by other people, not a complex system; I want people to give me the benefit of the doubt and assume that I know what I want and am capable of planning to get it. I'm not sure what this means for how I should treat other people.
How does this affect moral value judgements?
For me, not at all. My default, probably hammered in by years of nursing school, is to treat every human as worthy of dignity and respect. (On a gut level, it doesn't include animals, although it probably should. On an intellectual level, I don't think animals should be mistreated, but animal suffering doesn't upset me on the same visceral level that human suffering does. I think that on a gut level, my "circle of empathy" includes human dead bodies more than it includes animals).
One of my friends asked me recently if I got frustrated at work, taking care of people who had "brought their illness on themselves", i.e. by smoking, alcohol, drug use, eating junk food for 50 years, or whatever else people usually put in the category of "lifestyle choices." Honestly, I don't; it's not a distinction my brain makes. Some of my patients will recover, go home, and make heroic efforts to stay healthy; others won't, and will turn up back in the ICU at regular intervals. It doesn't affect how I feel about treating them; it feels meaningful either way. The one time I'm liable to get frustrated is when I have to spend hours of hard work on patients who are severely neurologically damaged and are, in a sense, dead already, or at least not people anymore. I hate this. But my default is still to talk to them, keep them looking tidy and comfortable, et cetera...
In that sense, I don't know if modelling different people differently is, for me, morally a right or a wrong thing to do. However, I spoke to someone whose default is not to assign people moral value unless he models them as agents. I can see this being problematic, since it's a high standard.
Conclusion
As usual for when I notice something new about my thinking, I expect to pay a lot of attention to this over the next few weeks, and probably notice some interesting things, and quite possibly change the way I think and behave. I think I've already succeeded in finding the source of some mysterious frustration with my roommate; I want to model her as an agent because of #1–she's my best friend and we've been through a lot together–but in the sense of #3, she's one of the least agenty people I know. So I consistently, predictably get mad at her for things like saying she'll do the dishes and then not doing them, and getting mad doesn't help either of us at all.
I'm curious to hear what other people think of this idea.