The idea for this post came out of a conversation during one of the Less Wrong Ottawa events. A joke about being a solipsist turned into a genuine question–if you wanted to assume that people were figments of your imagination, how much of a problem would this be? (Being told "you would be problematic if I were a solipsist" is a surprising compliment.)
You can rephrase the question as "do you model people as agents versus complex systems?" or "do you model people as PCs versus NPCs?" (To me these seem like a reframing of the same question, with a different connotation/focus; to other people they might seem like different questions entirely). Almost everyone at the table immediately recognized what we were talking about and agreed that modelling some people as agents and some people as complex systems was a thing they did. However, pretty much everything else varied–how much they modelled people as agents overall, how much it varied in between different people they knew, and how much this impacted the moral value that they assigned to other people. I suspect that another variable is "how much you model yourself as an agent"; this probably varies between people and impacts how they model others.
What does it mean to model someone as an agent?
The conversation didn't go into this in much detail, but I expect that, thanks to the typical mind fallacy, it's a fascinating discussion to have–the distinctions that seem clear and self-evident to me probably aren't what other people use at all. I'll explain mine here.
1. Reliability and responsibility. Agenty people are people I feel I can rely on, who I trust to take heroic responsibility. If I have an unsolved problem and no idea what to do, I can go to them in tears and say "fix this please!" And they will do it. They'll pull out a solution that surprises me and that works. If the first solution doesn't work, they will keep trying.
In this sense, I model my parents strongly as agents–I have close to 100% confidence that they will do whatever it takes to solve a problem for me. There are other people who I trust to execute a pre-defined solution for me, once I've thought of it, like "could you do me a huge favour and drive me to the bike shop tomorrow at noon?" but whom I wouldn't go to with "AAAAH my bike is broken, help!" There are other people who I wouldn't ask for help, period. Some of them are people I get along with well and like a lot, but they aren't reliable, and they're further down the mental gradient towards NPC.
The end result is that I'm more likely to model people as agents if I know them well and have the kind of relationship where I would expect them to want to help me. Of course, this is incomplete, because there are brilliant, original people whom I respect hugely but don't know well, and whom I wouldn't ask or expect to solve a problem in my day-to-day life. So this isn't the only factor.
2. Intellectual formidability. The extent to which someone comes up with ideas that surprise me and seem like things I would never have thought of on my own. This also includes people who have accomplished things I can't imagine myself succeeding at, like startups. In this sense, there are a lot of bloggers, LW posters, and people on the CFAR mailing list who are major PCs in my mental classification system, but whom I may not know personally at all.
3. Conventional "agentiness". The degree to which a person's behaviour can be described by "they wanted X, so they took action Y and got what they wanted", as opposed to "they did X kind of at random, and Y happened." When people seem highly agenty to me, I model their mental processes like this–my brother is one of them. I take the inside view, imagining that I wanted the thing they want and had their characteristics, e.g. relative intelligence, domain-specific expertise, social support, etc., and this gives better predictions than extrapolating from their past behaviour. There are other people whose behaviour I predict from how they've behaved in the past, using the outside view, while barely taking into account what they say they want for the future; for them, this is what gives useful predictions.
This category also includes the degree to which people have a growth mindset, which approximates how much they expect themselves to behave in an agenty way. My parents are a good example of people who are totally 100% reliable, but don't expect or want to change their attitudes or beliefs much in the next twenty years.
These three categories probably don't include all the subconscious criteria I use, but they're the main ones I can think of.
How does this affect relationships with people?
With people who I model as agents, I'm more likely to invoke phrases like "it was your fault that X happened" or "you said you would do Y, why didn't you?" The degree to which I feel blame or judgement towards people for not doing things they said they would do is almost directly proportional to how much I model them as agents. For people who I consider less agenty, whom I model more as complex systems, I'm more likely to skip the blaming step and jump right to "what are the things that made it hard for you to do Y? Can we fix them?"
On reflection, it seems like the latter is a healthier way to treat myself, and I know this (and consistently fail to do it). However, I want to be treated like an agent by other people, not like a complex system; I want people to give me the benefit of the doubt and assume that I know what I want and am capable of planning to get it. I'm not sure what this means for how I should treat other people.
How does this affect moral value judgements?
For me, not at all. My default, probably hammered in by years of nursing school, is to treat every human as worthy of dignity and respect. (On a gut level, that respect doesn't extend to animals, although it probably should. On an intellectual level, I don't think animals should be mistreated, but animal suffering doesn't upset me on the same visceral level that human suffering does. I think that on a gut level, my "circle of empathy" includes human dead bodies more than it includes animals.)
One of my friends asked me recently if I got frustrated at work, taking care of people who had "brought their illness on themselves", e.g. through smoking, alcohol, drug use, eating junk food for 50 years, or whatever else people usually put in the category of "lifestyle choices." Honestly, I don't; it's not a distinction my brain makes. Some of my patients will recover, go home, and make heroic efforts to stay healthy; others won't, and will turn up back in the ICU at regular intervals. It doesn't affect how I feel about treating them; it feels meaningful either way. The one time I'm liable to get frustrated is when I have to spend hours of hard work on patients who are severely neurologically damaged and are, in a sense, dead already, or at least not people anymore. I hate this. But my default is still to talk to them, keep them looking tidy and comfortable, et cetera...
In that sense, I don't know whether modelling different people differently is, for me, a morally right or wrong thing to do. However, I spoke to someone whose default is not to assign people moral value unless he models them as agents. I can see this being problematic, since it's a high standard.
Conclusion
As usual for when I notice something new about my thinking, I expect to pay a lot of attention to this over the next few weeks, and probably notice some interesting things, and quite possibly change the way I think and behave. I think I've already succeeded in finding the source of some mysterious frustration with my roommate; I want to model her as an agent because of #1–she's my best friend and we've been through a lot together–but in the sense of #3, she's one of the least agenty people I know. So I consistently, predictably get mad at her for things like saying she'll do the dishes and then not doing them, and getting mad doesn't help either of us at all.
I'm curious to hear what other people think of this idea.
PCs are also systems; they're just systems with a stronger heroic responsibility drive. On the other hand, when you successfully do things and I couldn't have predicted exactly how you would do them, I have no choice but to model you as an 'intelligence'. But that's, well... really rare.
Relevant: an article that explains the Failed Simulation Effect, by Cal Newport.