The idea for this post came out of a conversation during one of the Less Wrong Ottawa events. A joke about being a solipsist turned into a genuine question–if you wanted to assume that people were figments of your imagination, how much of a problem would this be? (Being told "you would be problematic if I were a solipsist" is a surprising compliment.)

You can rephrase the question as "do you model people as agents versus complex systems?" or "do you model people as PCs versus NPCs?" (To me these seem like a reframing of the same question, with a different connotation/focus; to other people they might seem like different questions entirely). Almost everyone at the table immediately recognized what we were talking about and agreed that modelling some people as agents and some people as complex systems was a thing they did. However, pretty much everything else varied–how much they modelled people as agents overall, how much it varied in between different people they knew, and how much this impacted the moral value that they assigned to other people. I suspect that another variable is "how much you model yourself as an agent"; this probably varies between people and impacts how they model others. 

What does it mean to model someone as an agent?

The conversation didn't go into this in much detail, but I expect that, because of the typical mind fallacy, it's a fascinating discussion to have–the distinctions that seem clear and self-evident to me probably aren't what other people use at all. I'll explain mine here.

1. Reliability and responsibility. Agenty people are people I feel I can rely on, who I trust to take heroic responsibility. If I have an unsolved problem and no idea what to do, I can go to them in tears and say "fix this please!" And they will do it. They'll pull out a solution that surprises me and that works. If the first solution doesn't work, they will keep trying. 

In this sense, I model my parents strongly as agents–I have close to 100% confidence that they will do whatever it takes to solve a problem for me. There are other people who I trust to execute a pre-defined solution for me, once I've thought of it, like "could you do me a huge favour and drive me to the bike shop tomorrow at noon?" but whom I wouldn't go to with "AAAAH my bike is broken, help!" There are other people who I wouldn't ask for help, period. Some of them are people I get along with well and like a lot, but they aren't reliable, and they're further down the mental gradient towards NPC. 

The end result of this is that I'm more likely to model people as agents if I know them well and have some kind of relationship where I would expect them to want to help me. Of course, this is incomplete, because there are brilliant, original people who I respect hugely, but who I don't know well, and I wouldn't ask or expect them to solve a problem in my day-to-day life. So this isn't the only factor. 

2. Intellectual formidability. The extent to which someone comes up with ideas that surprise me and seem like things I would never have thought of on my own. This also includes people who have accomplished things that I can't imagine myself succeeding at, like founding startups. In this sense, there are a lot of bloggers, LW posters, and people on the CFAR mailing list who are major PCs in my mental classification system, but who I may not know personally at all.

3. Conventional "agentiness". The degree to which a person's behaviour can be described by "they wanted X, so they took action Y and got what they wanted", as opposed to "they did X kind of at random, and Y happened." When people seem highly agenty to me, I model their mental processes like this–my brother is one of them. I take the inside view, imagining that I wanted the thing they want and had their characteristics, e.g. relative intelligence, domain-specific expertise, social support, etc., and this gives better predictions than extrapolating from their past behaviour. There are other people whose behaviour I predict based on how they've behaved in the past, using the outside view, while barely taking into account what they say they want in the future–and for them, that's what gives useful predictions. (A toy sketch of these two prediction styles appears just below this list.)

This category also includes the degree to which people have a growth mindset, which approximates how much they expect themselves to behave in an agenty way. My parents are a good example of people who are totally 100% reliable, but don't expect or want to change their attitudes or beliefs much in the next twenty years.

These three categories probably don't include all the subconscious criteria I use, but they're the main ones I can think of. 
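To make the inside-view/outside-view split in #3 concrete, here is a minimal sketch in Python. The goals, actions, and data are invented for illustration–nothing here is taken from anyone mentioned in the post:

```python
# Two ways to predict a person's next action. All data here is invented.
from collections import Counter

def inside_view(stated_goal, available_actions, advances):
    """Agent model: assume the person picks whichever available action
    best advances the goal they say they have."""
    return max(available_actions, key=lambda a: advances(a, stated_goal))

def outside_view(past_actions):
    """Complex-system model: ignore stated goals and predict whatever
    the person has done most often before."""
    return Counter(past_actions).most_common(1)[0][0]

actions = ["go to the gym", "watch TV"]
advances = lambda a, goal: 1.0 if (a == "go to the gym" and goal == "get fit") else 0.0

# Someone who says they want to get fit:
print(inside_view("get fit", actions, advances))                            # go to the gym
print(outside_view(["watch TV", "watch TV", "go to the gym", "watch TV"]))  # watch TV
```

Restated in these terms, the claim in #3 is that for some people the first function is the better predictor, and for others the second one is, almost regardless of what they say they want.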

How does this affect relationships with people?

With people who I model as agents, I'm more likely to invoke phrases like "it was your fault that X happened" or "you said you would do Y, why didn't you?" The degree to which I feel blame or judgement towards people for not doing things they said they would do is almost directly proportional to how much I model them as agents. For people who I consider less agenty, whom I model more as complex systems, I'm more likely to skip the blaming step and jump right to "what are the things that made it hard for you to do Y? Can we fix them?"

On reflection, it seems like the latter is a healthier way to treat myself, and I know this (and consistently fail at doing this). However, I want to be treated like an agent by other people, not a complex system; I want people to give me the benefit of the doubt and assume that I know what I want and am capable of planning to get it. I'm not sure what this means for how I should treat other people. 

How does this affect moral value judgements?

For me, not at all. My default, probably hammered in by years of nursing school, is to treat every human as worthy of dignity and respect. (On a gut level, it doesn't include animals, although it probably should. On an intellectual level, I don't think animals should be mistreated, but animal suffering doesn't upset me on the same visceral level that human suffering does. I think that on a gut level, my "circle of empathy" includes human dead bodies more than it includes animals). 

One of my friends asked me recently if I got frustrated at work, taking care of people who had "brought their illness on themselves", e.g. by smoking, alcohol, drug use, eating junk food for 50 years, or whatever else people usually put in the category of "lifestyle choices." Honestly, I don't; it's not a distinction my brain makes. Some of my patients will recover, go home, and make heroic efforts to stay healthy; others won't, and will turn back up in the ICU at regular intervals. It doesn't affect how I feel about treating them; it feels meaningful either way. The one time I'm liable to get frustrated is when I have to spend hours of hard work on patients who are severely neurologically damaged and are, in a sense, dead already, or at least not people anymore. I hate this. But my default is still to talk to them, keep them looking tidy and comfortable, et cetera...

In that sense, I don't know if modelling different people differently is, for me, a morally right or wrong thing to do. However, I spoke to someone whose default is not to assign people moral value unless he models them as agents. I can see this being problematic, since it's a high standard.

Conclusion

As usual for when I notice something new about my thinking, I expect to pay a lot of attention to this over the next few weeks, and probably notice some interesting things, and quite possibly change the way I think and behave. I think I've already succeeded in finding the source of some mysterious frustration with my roommate; I want to model her as an agent because of #1–she's my best friend and we've been through a lot together–but in the sense of #3, she's one of the least agenty people I know. So I consistently, predictably get mad at her for things like saying she'll do the dishes and then not doing them, and getting mad doesn't help either of us at all. 

I'm curious to hear what other people think of this idea. 

To what degree do you model people as agents?

In the past year or two, I've spent a lot of time explicitly trying to taboo "agenty" modelling of people from my thoughts. I didn't have a word for it before, and I'm still not sure agenty is the right word, but it's the right idea. One interesting consequence is that I very rarely get angry any more. It just doesn't make sense to be angry when you think of everyone (including yourself) mechanically. Frustration still happens, but it lacks the sense of blame that comes with anger, and it's much easier to control. In fact, I often find others' anger confusing now.

At this point, my efforts to taboo agenty thinking have been successful enough that I misinterpreted the first two paragraphs of this post. I thought it was about the distinction between people I model as full game-theoretic agents (I account for them accounting for my actions) versus people who will execute a fixed script without any reflective reasoning. To me, that's the difference between PCs and NPCs.
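(To make that distinction concrete, here is a toy sketch with an invented coordination game–the payoffs are mine, not the commenter's. The fixed-script NPC plays the same move whatever it expects of me; the game-theoretic agent best-responds to its prediction of my move, which is why modelling it requires modelling its model of me:

```python
# Invented 2x2 coordination game; PAYOFF[(their_move, my_move)] is their payoff.
MOVES = ["hunt stag", "hunt hare"]
PAYOFF = {("hunt stag", "hunt stag"): 3, ("hunt stag", "hunt hare"): 0,
          ("hunt hare", "hunt stag"): 1, ("hunt hare", "hunt hare"): 1}

def npc(predicted_my_move, script="hunt hare"):
    # Fixed script: the same move no matter what it expects me to do.
    return script

def agent(predicted_my_move):
    # Best response to its prediction of my move.
    return max(MOVES, key=lambda m: PAYOFF[(m, predicted_my_move)])

print(npc("hunt stag"), npc("hunt hare"))      # hunt hare hunt hare
print(agent("hunt stag"), agent("hunt hare"))  # hunt stag hunt hare
```

The agent's move tracks its model of me; the NPC's never does.)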

More recently, following this same trajectory, I've experimented with tabooing moral value assignments from my thoughts. Whenever I catch myself thinking of what one "should" do, I taboo "should" and replace it with something else. Originally, this amorality-via-taboo was just an experiment, but I was so pleased with it that I kept it around. It really helps you notice what you actually want, and things like "ugh" reactions become more obvious. I highly recommend it, at least as an experiment for a week or two.

Shmi

Maybe you can write a post detailing your experiences? Sounds quite interesting.

At this point, my efforts to taboo agenty thinking have been successful enough that I misinterpreted the first two paragraphs of this post. I thought it was about the distinction between people I model as full game-theoretic agents (I account for them accounting for my actions) versus people who will execute a fixed script without any reflective reasoning. To me, that's the difference between PCs and NPCs.

This is exactly the kind of other-people-thinking-differently-than-I-do interestingness that caused me to write this post!

The thing that was most interesting to me, on reflection, is that I do get angry less since I've started modelling most people "mechanically". It's just that my brain doesn't automatically extend that to people whom I respect a lot for whatever reason. For them, I will get angry. Which isn't helpful, but it is informative. I think it might just show that I'm more surprised when people who I think of as PCs let me down, and that when I get angry, it's because I was relying on them and hadn't made fallback plans, so the anger is more just my anxiety about my plans no longer working.

I do get angry less since I've started modelling most people "mechanically". It's just that my brain doesn't automatically extend that to people whom I respect a lot for whatever reason.

It seems that once you assign specific people to the NPC category you think of them as belonging to a lesser, inferior kind. That's why you get less angry at them and that's why those you respect don't get assigned there.

Dabor
I've gone through a change much like this over the past couple of years, although not with explicit effort. I would tend to get easily annoyed by coming across inconsequential stupidity or spite somewhere on the internet (not directed at me), and then proceed to be disappointed in myself for having something like that hang on my thoughts for a few hours. Switching to a model in which I'm responsible for my own reaction to other people does wonders for self-control and saves some needless frustration.

I can only think of one person (that I know personally) whom I treat as possessing as much agency as I expect of myself, and that results in offering and expecting full honesty. If I view somebody as at all agenty, I generally wouldn't try to spare their feelings or in any way emotionally manipulate them for my own benefit. I don't find that to be a sustainable way to act with strangers: I can't take the time to model why somebody flinging a poorly written insult over a meaningless topic that I happened to skim over is doing so, and I'd gain nothing (and very probably be wrong) in assuming they have a good reason.

As was mentioned with assigning non-agents negligible moral value, it does lead to higher standards, but those standards extend to oneself, potentially to one's benefit. Once you make a distinction of what the acts of a non-agent look like, you start more consistently trying to justify everything you say or do yourself. Reminds me a bit of "Would an idiot do that? And if they would, I do not do that thing."

I can still rather easily choose to view people as agents and assign moral value in any context where I have to make a decision, so I don't think having a significantly reduced moral value for others is to my detriment: it just removes the pressure to find a justification for their actions.

This will constitute my first comment on Less Wrong, so thank you for the interesting topic, and please inform me of any errors or inconveniences in my writing style.
hairyfigment
Welcome! Slightly wrong, but as you're still breathing I assume you know this.
Dabor
I was quoting. It would be more accurate to ask "Would this be done exclusively by idiots?", what with reversed stupidity. Alternatively, if the answer to the default version is yes, that just suggests that you require further consideration. Either way, it's pretty tautological–"Would only smart people do this? If not, am I doing it for a smart reason?"–but having an extra layer of flags for thinking doesn't hurt.
pangel
Being in a situation somewhat similar to yours, I've been worrying that my lowered expectations about others' level of agency (with elevated expectations as to what constitutes a "good" level of agency) have an influence on those I interact with. If I assume that people are somewhat influenced by what others expect of them, I must conclude that I should behave (as far as they can see) as if I believed them to be as capable of agency as myself, so that their actual level of agency will improve. This would work on me, for instance: I'd be more generally prone to take initiative if I saw trust in my peers' eyes.
johnswentworth
Well posted. I hope we will hear more from you in the future.
Document
I can't parse this. Is it a reference to something someone else in the thread said?
Dabor
From the main post.
Document
Thanks. Given that you want to improve your rationality to begin with, though, is believing that your moral worth depends on it really beneficial? Pain and gain motivation seems relevant. Later, you say: […] Do you actually value us and temporarily convince yourself otherwise, or is it the other way around?
Dabor
I'm not sure if you're asking about the moral worth I assign to myself or to others, so I'll answer both.

If you're referring to the moral worth I assign myself: I'm assuming that the problem would be that, as I learn about biases, I would consider myself less of an agent, so I wouldn't be motivated to discover my mistakes. You'll have to take my word for it that I pat myself on the back whenever I discover an error in thinking and mark it down, but other than that, I don't have an issue with my self-image being (significantly, long term) tied to how I estimate my efficacy at rationality, one way or another. I just enjoy the process.

If you're referring to how I value others: rationality seems inextricably tied to how I think of others. As I learn about how people get to certain views or actions, I consider them either more or less justified in doing so, and more or less "valuable" than others, if I may speak so bluntly of my fellow man. If I don't think there's a good reason to vandalize someone's property, and I think that there is a good reason to offer food to a homeless man, then, given that isolated knowledge and a choice from Omega on who I wish to save (assuming that I can't save both), I'll save the person who commits more justified actions. Learning about difficult-to-lose biases that can lead one to do "bad things", or about misguided notions that can cause people to do right for the wrong reason, inevitably changes how I view others (however incrementally), even if I don't offer them agency and see them as "merely" complex machines.

Considering that I know that saying I value others is the ideal, and that if I don't believe so, I'd prefer to, it would be difficult to honestly say that I don't value others. I'm not an empathetic person and don't tend to find myself worrying about the future of humanity, but I try to think as if I do for the purpose of moral questions. Seeing as I value valuing you, and am, from the outside, largely indistinguishable from somebody wh…
wallywalrus
My problem with this is that I want people to be agenty. For me, the distinction between agent and complex system is about self-awareness and mindfulness. If you think about yourself and what you are and aren't capable of and how you interact with the world, you will have more agency and be a better person. I'm disgusted by people who just live like thoughtless animals. I guess the obvious solution is to get over it. But I'm not sure I want to: it holds people to a higher standard.
Lumifer
I think you're confusing being proactive with being a good person. If a homicidal maniac acquires more agency, that doesn't make him a better person; it just makes him more dangerous.
tzok
I think what OP meant was the following: given two people with the same positive aims (e.g. be a good parent, do your job well), the agency-driven one will achieve more from the same hard work than the other. Therefore, you would wish the people around you to be more agenty as a default.
Lumifer
That's generally described by words like "effective" and "high-productivity". Why are you assuming that people around me have positive aims? Moreover, what's important is not just aims, but also the costs (and who pays them).

One of my habits while driving is to attempt to model the minds of many of the drivers around me (in situations of light traffic). One result is that when someone does something unexpected, my first reaction is typically "what does he know that I don't?" rather than "what is that idiot doing?". From talking to other drivers, this part of my driving seems abnormal.

In this sense, I model my parents strongly as agents–I have close to 100% confidence that they will do whatever it takes to solve a problem for me.

One of the frequent complaints about the 'agent' concept space, and the "heroic responsibility" concept in particular, is that it rarely seems to take into account people's spheres of responsibility. Are your parents the sort of people who would be able to solve anyone's problem, or are they especially responsible for you? Are other people that seem to be NPCs to you just people that don't care enough about you to spend limited cognitive (and other) resources on you and your problems?

With people who I model as agents, I'm more likely to invoke phrases like "it was your fault that X happened" or "you said you would do Y, why didn't you?"

…

One of the frequent complaints about the 'agent' concept space, and the "heroic responsibility" concept in particular, is that it rarely seems to take into account people's spheres of responsibility. Are your parents the sort of people who would be able to solve anyone's problem, or are they especially responsible for you? Are other people that seem to be NPCs to you just people that don't care enough about you to spend limited cognitive (and other) resources on you and your problems?

My younger self didn't get this. I remember being surprised and upset that my parents, who would always help me with anything I needed, wouldn't automatically also help me help other people when I asked them. For example, my best friend needed somewhere to stay with her one-year-old, and I was living with my then-boyfriend, who didn't want to share an apartment with a toddler. I was baffled and hurt that my parents didn't want her staying in my old bedroom, even if she paid rent! I'd taken responsibility for helping her, and they had responsibility for helping me, so why not?

Now I know that that's not how most people behave, and that if it was, it might actually be quite dysfunctional.

Do you get more of what you want by blaming people or assigning fault?

I don't think so.

Vaniver
Agreed. Then it seems particularly dangerous to do that with people you consider especially valuable.

One of the frequent complaints about the 'agent' concept space, and the "heroic responsibility" concept in particular, is that it rarely seems to take into account people's spheres of responsibility. Are your parents the sort of people who would be able to solve anyone's problem, or are they especially responsible for you? Are other people that seem to be NPCs to you just people that don't care enough about you to spend limited cognitive (and other) resources on you and your problems?

I agree with this. I keep being a little puzzled over the frequent use of the "agenty" term at LW, since I haven't really seen any arguments establishing why this would be a useful distinction to make in the first place. At least some of the explanations of the concept seem mostly like cases of correspondence bias (I was going to link an example here, but can't seem to find it anymore).

I keep being a little puzzled over the frequent use of the "agenty" term at LW, since I haven't really seen any arguments establishing why this would be a useful distinction to make in the first place.

Here is my brief impression of what the term "agenty" on LW means:

An "agent" is a person with surplus executive function.

"Executive function" is some combination of planning ability, willpower, and energy (only somewhat related to the concept in psychology). "Surplus" generally means "available to the labeler on the margin." Supposing that people have some relatively fixed replenishing supply of executive function, and relatively fixed consistent drains on executive function, then someone who has surplus executive function today will probably have surplus executive function tomorrow, or next week, or so on. They are likely to be continually starting and finishing side projects.*

The practical usefulness of this term seems obvious: this is someone you can delegate to with mission-type tactics (possibly better known as Auftragstaktik). This ability makes them good people to be friends with. Having this ability yourself both m…

Okay, now that does sound like a useful term.

Does anyone happen to know of reliable ways for increasing one's supply of executive function, by the way? I seem to run out of it very quickly in general.

Does anyone happen to know of reliable ways for increasing one's supply of executive function, by the way? I seem to run out of it very quickly in general.

There are a handful of specific small fixes that seem to be helpful. For example, having a capture system (which many people are introduced to by Getting Things Done) helps decrease cognitive load, which helps with willpower and energy. Anti-akrasia methods tend to fall into clusters of increasing executive function or decreasing value uncertainty / confusion. A number of people have investigated various drugs (mostly stimulants) that boost some component.

I get the impression that, in general, there are not many low hanging fruit for people to pick, but it is worth putting effort into deliberate upgrades.

After joining the military, where executive function on demand is sort of the meta-goal of most training exercises, I found that having a set wardrobe actually saves a great deal of mental effort. You just don't realize how much time you spend worrying about clothes until you have a book which literally has all the answers and can't be deviated from. I know that this was also a thing that Steve Jobs did–one 'uniform' for life. President Obama apparently does it as well. http://www.forbes.com/sites/jacquelynsmith/2012/10/05/steve-jobs-always-dressed-exactly-the-same-heres-who-else-does/

There are a number of other things I've learned for this which are maybe worth writing up as a separate post. Not sure if that's within the purview of LW, though.

metastable
I agree, though it's always been interesting to me how the tiniest details of clothing become much clearer signals when everybody's almost the same. Other military practices that I think conserve your energy for what's important:
- Daily, routinized exercise, done in a way that very few people are deciding what comes next.
- Maximum use of daylight hours.
- Minimized high-risk projects outside of the workplace (paternalistic health care, insurance, and in many cases, housing and continuing education).
KnaveOfAllTrades
It's plausible to me that a much higher proportion of peeps than is generally realized operate substantially better on sleep schedules different from what a 9-5 job forces, in which case enforced maximal (or at least, greater) use of daylight hours is possibly taking place on a societal (global?) level, though not as strongly as in militaries.
metastable
This is plausible to me, too. I've had very productive friends with very different rhythms. But I suspect far more people believe they operate best staying up late and sleeping late than actually do. There's a reason day shifts frequently outperform night shifts given the same equipment. And we know a lot of people suffer health-wise on night shift.
Document
I don't think one forced sleep schedule outperforming another is strong evidence that forced schedules are better than natural schedules. Edit: Also, depending on geography, time of year, and commute, a 9-5 job may force one to get up some time before dawn and/or stay up some time after dark.
Decius
I also intuit that most people do best on a non-forced sleep schedule; I don't think that many people know how to have an unforced schedule.
KnaveOfAllTrades
I'd be interested to see this in Discussion. I'm going the opposite way: paying more attention to non-formulaic outfits, after years of {varying only within one or two very circumscribed formulas, or even wearing one of exactly the same few set outfits for months–or more–at a time}. So far it's interesting figuring things out, but it's increasing wardrobe load, and if I continue expanding my collection, it could become substantially more expensive than what I was doing before. The dialectic outside view suggests I'll end up settling down a bit and going back to a more repetitive approach, but with a greater number of variables (e.g. introducing variables for level of formality, weather, audience, tone-fancied-on-given-day, etc.) and items from which to choose.
pscheyer
As requested. http://lesswrong.com/r/discussion/lw/il7/military_rationalities_and_irrationalities/
KnaveOfAllTrades
Awesome!
wedrifid
Stimulants, exercise and the removal of chronic stress.
Decius
Those sound like ways of reducing the demand, not increasing the supply. "Spending it better" is one option, but not the one that I want.
wedrifid
They are not. Each of those increases the supply of executive function:
* Stimulants.
* Exercise.
* The removal of chronic stress.
James_Miller
Lumosity's new game "Train of Thought" might do it.
Document
Obligatory: http://wondermark.com/638/
gwern
I sympathize more with that than I would prefer. (Now if you'll excuse me, I need to get back to analyzing the effects of day-of-week & hour-of-day on spaced repetition memory recall.)
Lumifer
Would that be a synonym of "has his shit together" and "gets stuff done"?
Vaniver
Mostly. I'm trying to make the concept precise and transparent; saying "an agent is a person who gets stuff done" leaves the mechanism by which they get things done opaque, and most of the posts discussing agency seem to have a flavor of "an agent is a person who gets my stuff done" (notably including the possibility that the speaker is not an agent in that sense).
Document
From the Wikipedia article you link: […] This comes before the article actually defines "the concept", except by introducing it as "Auftragstaktik".
Swimmer963 (Miranda Dixon-Luinenburg)
I don't know if it's a useful way to think, but it's the way I do think in practice, and not necessarily because of reading Less Wrong; I think that's just where I found words for it. And based on the conversation I mentioned, other people also think like this, but using different criteria than mine. Which is really interesting. And after reflecting on this a bit and trying to taboo the term "agenty" and figure out what characteristics my brain is looking at when it assigns that label, I probably will use it less to describe other people. In terms of describing myself, I think it's a good shorthand for several characteristics that I want to have, including being proactive, which is the word I substitute in if I'm talking to someone outside Less Wrong about my efforts at self-improvement.
Lumifer
For me that depends on what this "unexpected" is. For example, if I see a car in the next lane and ahead of me start to slightly drift into my lane, my reaction is that I know what this idiot is doing -- he is about to switch lanes and he doesn't see me. On the other hand, if a car far ahead hits the brakes and I don't know why -- there my reaction is "he knows something I don't"...
skepsci
I do the same sort of thinking about the motivations of other drivers, but it seems strange to me to phrase the question as "what does he know that I don't?" More often than not, the cause of strange driving behaviors is lack of knowledge, confusion, or just being an asshole. Some examples of this I saw recently include:
1) a guy who immediately cut across two lanes of traffic to get in the exit lane, then just as quickly darted out of it at the beginning of the offramp;
2) a guy on the freeway who slowed to a crawl despite traffic moving quickly all around him;
3) that guy who constantly changes lanes in order to move just slightly faster than the flow of traffic.
I'm more likely to ask "what do they know that I don't?" when I see several people ahead of me act in the same way that I can't explain (e.g. many people changing lanes in the same direction).

I propose a theme song for this comment section.

One of my fond memories of high school is being a little snot and posing a math problem to a "dumb kid". I proceeded to think up the wrong answer, and he got the right one (order of operations :D ). This memory is a big roadblock to me modeling other people as different "types" - differences are mostly of degree, not kind. A smart person can do math? Well, a dumb person has math that they can do well. A smart person plans their life? Dumb people make plans too. A dumb person uses bad reasoning? Smart people use bad reasoning.

I consistently, predictably get mad at her for things like saying she'll do the dishes and then not doing them

This doesn't sound like a lack of agentiness. This sounds like a communication problem. Do you think that you're more likely to think of someone as "agenty" if their planning processes are (seemingly) transparent to you (e.g. "this person said they wanted a cookie, then they took actions to get a cookie") vs. non-transparent (e.g. "that person said they wanted a salad, then they took actions to get a cookie")?

The above is a troll fake account (see "Eliezar" instead of "Eliezer", and "Yudkowky" instead of "Yudkowsky"); please delete and ban him.

#1 grates for me. If a friend goes to me in tears more than a couple of times demanding that I fix their bicycle/grades/relationship/emotional problems, I will no longer consider them a friend. If you ask politely I'll try to get you on the right track (here's the tool you need and here's how to use it/this is how to sign up for tutoring/whatever), but doing much more than that is treating the asker as less than an agent themself. Going to your friend in tears before even trying to come up with a solution yourself is not a good behavior to encourage (I've been on both sides of this, and it's not good for anyone).

Don't confuse reliability and responsibility with being a sucker.

derefr
There's a specific failure-mode related to this that I'm sure a lot of LW has encountered: for some reason, most people lose 10 "agency points" around their computers. This chart could basically be summarized as "just try being an agent for a minute sheesh." I wonder if there's something about the way people initially encounter computers that biases them against trying to apply their natural level of agency? Maybe, to coin an isomorphism, an "NPC death spiral"? It doesn't quite seem to be learned helplessness, since they still know the problem can be solved, and work toward solving it; they just think solving the problem absolutely requires delegating it to a Real Agent.
kalium
Many people vastly overestimate the likelihood of results like "computer rendered unbootable" or "all your data is lost forever." (My grandfather won't let anyone else touch the TV or remote because he thinks we could break it by trying to change the channel in the wrong way.) If I thought those were likely results of clicking on random menu items I'd want to delegate too. When I notice myself acting this way around computers, though, the thought process goes something like this:
1. I have a problem, likely because I did something that my social circle would consider stupid.
2. Past attempts to solve computer-related problems myself have a low (30% or so) success rate, so I am likely to have to explain the whole situation to someone who will judge me less intelligent as a result.
3. Any attempts at solving the problem myself will lengthen this explanation and raise the chance that it includes something truly idiotic (this also makes the explanation more stressful, which makes me worse at explaining everything I've done, which makes the problem harder for an expert to solve).
4. Meanwhile, if I succeed it is unimpressive. "Oh, you're 25 and just figured out how to tie your own shoes?" Not exactly an accomplishment I can feel good about.
5. Just ask for help now before I make it any worse (or perhaps read for a while, try one or two methods based not on likelihood of working but on how easy they are to justify under stress, then ask for help).
Swimmer963 (Miranda Dixon-Luinenburg)
I guess being a PC in that sense sucks. I try not to do this. When I go to my parents in tears, it's because I've tried all the usual solutions and they aren't working and I don't know why, and/or because everything else possible is going wrong at the same time and I don't have the mental energy to deal with my broken bike on top of disasters at work and my best friend having a meltdown. Likewise, being the one who takes heroic responsibility for someone isn't necessarily a healthy role to take, as I've realized.
Decius
Sometimes heroic responsibility requires metaphorically throwing a guide to fishing at someone. Sometimes it requires the metametaphor (metwophor?) of telling them where the library is that contains that kind of book. And sometimes it requires giving them a literal fish.
Decius
"Break down and cry" is a failure mode. I'm reminded of the "spending too much time in airports" quote, but that's likely because I've been in airports and aircraft for 15 hours.
kalium
To clarify, there's a big difference between coming to me in tears asking for help and coming to me in tears asking for a complete solution handed to you on a platter; I've just seen enough of the latter that it really, really irritates me. Also, "solves my problem immediately when asked, regardless of whether it's in his interest" seems to me like an attribute of an NPC and not that of a PC.
MugaSofer
I don't know, it's less annoying than coming to you in tears and then getting annoyed when provided with a solution. (Although that may depend on whether you consider your friends agents ... hmm.)
kalium
If you have a problem and don't ask for a solution, then I'd try not to be annoyed with you if you're annoyed at being offered one. Maybe you already know exactly what you're going to do but just want to get some complaining in first. Nothing wrong with that.
Vaniver
I suspect kalium would downgrade someone from friend status if that happened once, which does map on to the annoyance difference.
MugaSofer
... I have no idea why I didn't realise that. Thank you.

PCs are also systems; they're just systems with a stronger heroic responsibility drive. On the other hand, when you successfully do things and I couldn't predict exactly how you would do them, I have no choice but to model you as an 'intelligence'. But that's, well... really rare.

Swimmer963 (Miranda Dixon-Luinenburg)
I guess for me it's not incredibly rare that people successfully do things and I can't predict exactly how they would do them. It doesn't seem to be the main distinction that my brain uses to model PC-ness versus NPC-ness, though.
[anonymous]
I find this comment...very, very fun, and very, very provocative. Are you up for–in a spirit of fun–putting it to the test? Like, people could suggest goals the successful completion of which would potentially label themselves as "an Intelligence" according to Eliezer Yudkowky–and then you would outline how you would do it? And if you either couldn't predict the answer, or we did it in a way different enough from your predictions (as judged by you!), we'd get bragging rights thereafter? (So for instance, we could put in email sigs, "An intelligence, certified Eliezar Yudkowky." That kind of thing.) A few goals right off the top of my head:
* Raise $1000 for MIRI or CFAR
* Get a celebrity to rec HPMOR, MIRI or CFAR (the term "celebrity" would require definition)
* Convince Eliezer to change his mind on any one topic of significance (as judged by himself)
* Solve any "open question" that EY cares to list (again, as judged by himself–I know that "how to lose weight" is such a question, and presumably there are others)
Basically the idea is that we get to posit things we think we know how to do and you don't... and you get to posit things that you don't know how to do but would like to... and then if we "win" we get bragging rights. There are pretty obviously some twisted incentives here (mostly in your favor!) but we'll just have to assume that you're a man of honor. And by "a man of honor" I mean "a man whose reputation is worth enough that he won't casually throw a match." I dunno, does that sound fun to anybody else?
fowlertm
Do you mean to say that you can generally predict not only what person A will do but precisely how they will do it? Or do you mean that if a person succeeds then you are unsurprised by how they did it, but if they fail or do something crazy you aren't any better than other people at prediction? Either way I would be interested in hearing more about how you do that.

Since I've been teaching I've gotten much better at modeling other people–you might say I've gotten a hefty software patch to my Theory of Mind. Because I mostly interact with children that's what I am calibrated to, but adults have also gotten much less surprising. I attribute my earlier problems mostly to lack of experience and to simply not trying very hard to model people's motivations or predict their behavior. Further, I've come to realize how important these skills are, and I aspire to reaching Quirrellesque heights of other-modeling.

Some potential ways to improve theory of mind:
- Study the relevant psychology/neuroscience.
- Learn acting.
- Carefully read fiction which explores psychology and behavior in an in-depth way (Henry James?). Plays might be even better for this, as you'd presumably have to fill in a lot of the underlying psychology on your own. In conjunction with acting this would probably be even more powerful. You could even go as far as to make bets on what characters will do so as to better calibrate your intuitions.
- Write fiction which does the same.
Placing bets could be extended to real groups of people, though you might not want to let anyone know you were doing this, because they might think it's creepy and it could create a kind of anti-induction.
MugaSofer
That sounds like a very useful sequence.
Halfwitz
If you regularly associate with people of similar intelligence, how rare can that be? Even if you are the smartest person you know (unlikely considering the people you know, some of whom exceed your competence in mathematics and philosophy), anyone with more XP in certain areas would behave unpredictably in said areas, even if they had a smaller initial endowment. My guess is your means-prediction lobe is badly calibrated because after the fact you say to yourself, “I would have predicted that.” This could be easily tested.

Intelligent people tend to only on rare occasions tackle problems where it stretches the limit of their cognitive abilities to (predict how to) solve them. Thus, most of my exposure to this comes by way of, e.g., watching mathematicians at decision theory workshops prove things in domains where I am unfamiliar - then they can exceed my prediction abilities even when they are not tackling a problem which appears to them spectacularly difficult.

Decius
Where the task they are doing has a skill requirement that you do not meet, you cannot predict how they will solve the problem. Does that sound right? It's more obvious that the prediction is hard when the decision is "fake-punt, run the clock down, and take the safety instead of giving them the football with so much time left" rather than physical feats. Purely mental feats are a different kind of different.
[anonymous]
My scepticism depends on how detailed your predictions are, though your fiction/rhetorical abilities likely stem in part from unusually good person-modelling abilities. Do you find yourself regularly and correctly predicting how creative friends will navigate difficult social situations or witty conversations, e.g., guessing punchlines to clever jokes, or predicting the course of a status game?
[anonymous]
I may be confused about the "resolution" of your predictions. Suppose you were trying to predict how intelligent person X will seduce intelligent person Y. If you said, "X will appeal to Y's vanity and then demonstrate social status," I feel that kind of prediction is pretty trivial. But predicting more exactly how X would do this seems vastly more difficult. How would you rate your abilities in this situation, if 1 equals predictions at the resolution of the given example and 10 equals "I could draw you a flow chart which will more-or-less describe the whole of their interaction"?
Vaniver
Relevant: an article that explains the Failed Simulation Effect, by Cal Newport.
derefr
I note that this suggests that an AI that was as smart as an average human, but also as agenty as an average human, would still seem like a rather dumb computer program (it might be able to solve your problems, but it would suffer akrasia just like you would in doing so). The cyberpunk ideal of the mobile exoself AI-agent, Getting Things Done for you without supervision, would actually require something far beyond the equivalent of an average human to be considered "competent" at its job.
MugaSofer
Of course, for mere mortals, it'd be somewhat less rare... Woah, I bet that's where the whole "anyone more than a certain amount smarter than me is simply A Smart Person" phenomenon comes from.

The OP here raises a very interesting question, but I can't help but be distracted by the phrasing. Humans are both decision-making agents and complex biochemical systems, so my poor pedantic brain is spinning its wheels trying to turn that into a dichotomy. If it were me I would have said Subject v Object, especially since this ties into objectification, but that's a nitpick too minor for me not to upvote it. Anyway...

Personally I lean towards a "complex systems" model of other humans. People can surprise you, pleasantly or unpleasantly, in how…

I model basically everyone I interact with as an agent. This is useful when trying to get help from people who don't want to help you, such as customer service or bureaucrats. By giving the agent agency, it's easy to identify the problem: the agent in question wants to get rid of you with the least amount of effort so they can go back to chatting with their coworkers/browsing the internet/listening to the radio. The solution is generally to make it seem like less effort to get rid of you by helping you with your problem (which is their job after all) than something else. This can be done by simply insisting on being helped, making a ruckus, or asking for a manager, depending on the situation.

This post reminded me of a conversation I was having the other day, where I noted that I commit the planning fallacy far less than average because I rarely even model myself as an agent.

Good article!

One of my hobbyhorses is that you can gain a good deal of insight into someone's political worldview by observing whom they blame versus absolve for bad acts, since blame implies agency and absolution tends to minimize it. Often you find this pattern to be the reverse of stated sympathies. Examples left as an exercise to the reader.

Almost everyone at the table immediately recognized what we were talking about and agreed that modelling some people as agents and some people as complex systems was a thing they did.

Wow. I did not realize that so many other people felt aware of this dichotomy.

So, usually when I'm in a good mood, there isn't any dichotomy. I model everyone in exactly the same way that I model myself - as individuals with certain strengths and weaknesses. You might say that I even model myself as a complex system, to a degree. The model is complete enough that the compl…

[anonymous]

I aspire to model myself as the only "agent" in the system, kind of like Harry does in HPMOR (with the possible exception of Professor Quirrell). I'm the one whose behavior I can change most directly, so it is unhelpful (at least for me) to model circumstances (which can cause a dangerous victim mentality) or other people as agents. Even if I know I can make an argument to try to change another person's mind, and estimate I have a 50/50 chance of success, it is still me who is making the choice to use Argument A rather than Argument B.

In terms…

scaphandre
I imagine it is probably emotionally taxing and isolating for a human to model themselves as the only true agent in their world. That's a lot of responsibility, inefficient for big projects (where coordinating with other 'proper' agents might be particularly useful), and probably kinda lonely.

I am all for personal responsibility and recognise that acting to best improve the world is up to me. I am currently implemented in a great ape–a mammal with certain operating requirements. Part of my behaviour in the world has to include acting to keep that great ape working well. To avoid burdening that silly ape with the emotional weight of being the only responsible agent in the system, and to allow more fun agent-agent interactions, it might make sense to lower the mental bar for those you would call PCs?
Decius
If I am the only agent in my circle of knowledge, I want to believe so.
scaphandre
Agreed. But I'd place more value on searching for other agents when I know none. From this thread we can see there is not a fixed concept of what meets the agent criteria. If I knew zero other agents, I'd be more inclined to spend more effort searching or perhaps be a little more flexible with my interpretation of what an agent might be. Of course tricking yourself into solipsism or Wilson worship is a conceivable failure mode, but I don't think it's likely here.

Hmm. I seem to very much, very purely, model myself as an NPC by these definitions. By extension, since I can't use empathic modelling to differentiate like you describe doing, I model exactly everyone as NPCs. It's also the case that I've never had to model a PC in detail; I know about some people who are, probably including you, but I've never really had the opportunity to interact with such a rare creature for long enough to develop a new way of modelling, and seem to be just winging it by assigning a probability-bending magic black box power called "rationality".

I suspect all people, including me, are NPC meat-computers running firmware/software that provides the persistent, conscious illusion of PC-ness (or agenty-ness). Some people are more advanced computers and, therefore, seem more agenty... but all are computers nonetheless.

Modeling people this way (as very complex NPCs), as some have pointed out in the comments, seems to be a rather effective means of limiting the experience of anger and frustration... or at least making anger and frustration seem irrational, thereby causing it (at least in my experience) to…

derefr
This seems to suggest that modelling people (who may be agents) as non-agents has only positive consequences. I would point out one negative consequence, which I'm sure anyone who has watched some schlock sci-fi is familiar with: you will only believe someone when they tell you you are caught in a time-loop if you already model them as an agent. Substitute anything else sufficiently mind-blowing and urgent, of course. Since only PCs can save the world (nobody else bothers trying, after all), then nobody will believe you are currently carrying the world on your shoulders if they think you're an NPC. This seems dangerous somehow.

Any mind that I can model sufficiently well to be accurate ceases to be an agent at that point.

If I can predict what you are going to do with 100% certainty, then it doesn't matter what internal processes lead you to take that action. I don't need to see into the black box to predict the action of the machine.

People I know well maintain their agenthood by virtue of the fact that they are sufficiently complex to think in ways I do not.

For these reasons, I rarely attempt to model the mental processes of minds I consider to be stronger than mine (in the ra…

Schelling's Strategy of Conflict says that in some cases, advertising non-agency can be useful, something like "If you cross this threshold, that will trigger punitive retaliation, regardless of cost-benefit, I have no choice in the matter."
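(A toy worked example of that commitment logic, with invented payoff numbers–this is only a sketch of Schelling's point, not anything from the book:

```python
# Payoffs are (challenger, defender). Retaliation hurts the defender too,
# so a threat to retaliate is not credible coming from a flexible
# cost-benefit reasoner–but it becomes credible if the defender visibly
# removes their own choice in advance.
PAYOFFS = {
    ("stay out", None):      (0, 2),
    ("cross", "acquiesce"):  (3, 1),
    ("cross", "retaliate"):  (-2, -1),
}

def flexible_defender():
    # Ex post, acquiescing (payoff 1) beats retaliating (payoff -1).
    return "acquiesce"

def committed_defender():
    # "I have no choice in the matter": retaliation is automatic.
    return "retaliate"

def challenger(defender):
    crossing = PAYOFFS[("cross", defender())][0]
    return "cross" if crossing > PAYOFFS[("stay out", None)][0] else "stay out"

print(challenger(flexible_defender))   # cross     (3 > 0)
print(challenger(committed_defender))  # stay out  (-2 < 0)
```

Advertising non-agency is exactly the move from the first defender to the second.)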

Viliam_Bur
Is there a situation where it would be strategic to live all your life, or large areas of your life, in non-agency? Maybe life in a dictatorship is like this. Be too agenty, and someone may notice and decide you are a potential risk, and your genes and memes get eliminated. Later, even if the dictatorship is gone, the habits and the culture remain. Is there a way to compare average citizens' agency in different nations, and correlate that with their history?
nicdevera
I guess signalling non-agency is tactical-level: protective camouflage, poker bluffing, etc. Agenty thinking as above is essentially strategic–winning with moves that are creative, devious, hard to predict or counter, going meta, gaming the system. Pretending to be a loyal citizen of Oceania is a good tactic while you covertly work towards other goals. For cultural agency, the Wikipedia page on locus of control is one place to start. And there was the Power Distance Index in Gladwell's Outliers.
Viliam_Bur
Humans are not very good at pretending. If you pretend something, you start believing it, especially if you have to pretend it for years. And even if you succeeded, it would be very difficult to teach your children–if they do it wrong, it may result in death for your whole family, but if you wait until they are reasonable enough, they may already strongly believe other things.

Hm, interesting. I have some terminological confusion to battle through here.

My mind associates "agent" with either Bond/MiB creatures or game theory and economics. The distinction you're drawing I would describe as active and passive. "Agenty"/PC people are the active ones, they make things happen, they shape the narrative, they are internally driven to change their environment. By contrast the "complex-system"/NPC people are the passive ones, they react to events, they go with the flow, the circumstances around them drive th…

derefr
A continuum is still a somewhat-unclear metric for agency, since it suggests agency is a static property. I'd suggest modelling a sentience as a colony of basic Agents, each striving toward a particular utility-function primitive. (Pop psychology sometimes calls these "drives" or "instincts.") These basic Agents sometimes work together, like people do, toward common goals; or override one another for competing goals. Agency, then, is a bit like magnetism–it's a property that arises from your Agent-colony when you've got them all pointing the same way; when "enough of you" wants some particular outcome that there's no confusion about what else you could/should be doing instead. In effect, it allows your collection of basic Agents to be abstracted as a single large Agent with its own clear (though necessarily more complex) goals.
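(One way to make the magnetism analogy concrete–a minimal sketch with invented numbers, under the assumption that each drive can be summarized as a preference vector over outcomes:

```python
# "Agency" as alignment of a colony of drives. All numbers are invented.
import math

def agency(drives):
    """drives: list of equal-length preference vectors.
    Returns |sum of vectors| / sum of |vector|: 1.0 when every drive
    points the same way, near 0.0 when the drives mostly cancel out."""
    dims = len(drives[0])
    total = [sum(v[i] for v in drives) for i in range(dims)]
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return norm(total) / sum(norm(v) for v in drives)

aligned    = [[1, 0], [2, 0], [1, 0]]   # every drive wants the same outcome
conflicted = [[1, 0], [-1, 0], [0, 1]]  # drives pulling against each other

print(round(agency(aligned), 2))     # 1.0
print(round(agency(conflicted), 2))  # 0.33
```

On this toy measure, the aligned colony can be abstracted as one big Agent pointing along the shared direction, while the conflicted one cannot.)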
Lumifer
That's a bit different, I think. You're describing what I'd call the degree of internal conflict (which is only partially visible to the conscious mind, of course). However it seems that agency is very much tied to propensity to act and that is not a function solely of how much in agreement your "drives" are. Crudely speaking, agency is the ability to get your ass off the couch and do stuff. Depressed people, for example, have very low agency and I don't think that's because they have a particularly powerful instinct which says "I want to sit here and mope".

I thought #3 was the definition of "agent", which I suppose is why it got that label. #1 sounds a little like birds confronted by cuckoo parasitism, which Eliezer might call "sphexish" rather than agenty.

I've used "agentness" in at least one LessWrong post to mean the amount of information you need to predict a person's behavior, given their environment, though I don't think I defined it that way. A person whose actions can always be predicted from existing social conventions, or from the content of the Bible, is not a moral agent. You might call them a moral person, but they've surrendered their agency.

Perhaps I first got this notion of agency from the Foundation Trilogy: Agenthood is the degree to which you mess up Hari Seldon's equations.

My prefere…

pwno

The degree to which I feel blame or judgement towards people for not doing things they said they would do is almost directly proportional to how much I model them as agents.

I've noticed that people are angrier at behaviors they can't explain. The anger subsides when they learn about the motives and circumstances that led to the behavior. If non-agents are supposed to be less predictable, I'd guess we're more inclined to judge/blame them.

Here's my answer to the title question, before reading the post*:

I understand the word "agent" to refer to a model I created specifically for modeling humans. The two agree to such a degree that any discrepancy is almost entirely due to the ambiguity of these words.

After reading the post: I don't notice myself making the distinction you describe. Under your distinction, the way I model people seems more like treating everyone (including myself) as a complex system than treating everyone (including myself) as an agent, but I'm not sure of this.

*Well, I peeked at the first few sentences.

Shmi

Upon reflection, I think I consider people whose behavior I have trouble modeling/predicting (roughly those smarter than I am) as PCs and the rest (including myself, unfortunately) as NPCs. However, sometimes I get surprised by NPCs behaving in an agenty way, and sometimes I get surprised by PCs behaving predictably, including predictably wrong.

Document
Is this a clever "paradoxical" description of what happens that I'm not quite parsing, or is it just a contradiction?
Decius
The expected action of someone more agenty than oneself, when confronted with certain situations, is to take an action which falls into the category "all others". When they pick "repeat the same course of action which just failed", it is surprising that they picked a predictable response, rather than a response not contained in the predictions.

This seems to me to be a conversation about semantics. I.e.:

IF

You and I both view John to have the same:

1) Reliability and responsibility
2) Intellectual formidability
3) Conventional "agentiness"

BUT

You think that intellectual formidability is part of what makes someone "agenty" and I don't.

THEN

We agree about everything that's "real", and are only choosing to define the word "agent" differently.


I anticipate a reasonable chance that something in this conversation just went right over my head, and that it's about som…

[-][anonymous]

There's no natural grouping to your examples. Some of them are just people who care about you. Others are people who do things you find impressive.

Frankly, this whole discussion comes across as arrogant and callous. I know we're ostensibly talking about "degree of models" or whatever, but there are clear implicit descriptive claims being made, based on value judgments.

Swimmer963 (Miranda Dixon-Luinenburg)
I'm aware that my brain may group things in ways that aren't related to useful criteria or criteria I would endorse. My brain was doing this anyway before I wrote the post. Discussing it is an essential part of noticing it and self-modifying or compensating in some way. How, specifically, do you think that having this discussion is arrogant and callous? What would have to be different about it for it not to be arrogant and callous?
[anonymous]
Sure, I should have been more specific. Here are two questions: 1) How do I model the minds of other people? 2) What are the minds of other people like?

My objection is that answers to (1) are being confused with answers to (2). In particular, a reductive (non-agenty) answer to (1) will tend to drift towards a reductive answer to (2). The "arrogance" I see stems from the bias towards using non-reductive models when dealing with behaviors we approve of, and reductive models when dealing with behaviors we don't approve of.

For example, consider a devout Mormon, who spends two years traveling in a foreign country on a religious mission. Is this person an agent? Those already sympathetic to Mormon beliefs will be more likely to advance an agent-like explanation of this behavior than someone who doesn't believe in Mormon claims.

As another example, is Eliezer an agent? If you share his beliefs about UFAI, you probably think so. But if you think the whole AI/Singularity thing is nonsense, you're more likely to think of Eliezer as just another time-wasting blogger best known for fan-fiction. Why doesn't he get a real job? :P

Can a smoker be an agent? We tend to assume any unhealthy behavior has a non-agenty explanation, while healthy behaviors are agenty. We can't imagine the mind of an agent who really doesn't believe or care that smoking is unhealthy.

Your roommate doesn't wash the dishes. Have you tried imagining a model of her as an agent, in which she acts in accordance with her own values and decides not to wash the dishes? If she places very little internal value on clean dishes, she may not be able to relate to the mind of an agent who places any value on washed dishes. She may even be modeling you as a non-agent with a quirky response to the stimulus "dirty-dishes". (Did your parents never mistake a value they didn't understand for non-agenty behavior on your part?)

People of one political ideology utterly fail to model those with other political ideologies.
Swimmer963 (Miranda Dixon-Luinenburg)
Agreed that you have to be very careful about letting your answers to (1) slide into your answers to (2). But I don't think you can do th…

Oh, she likes clean dishes all right. She nagged me about them plenty. It was just that her usual response to dirty dishes was "it's too ughy to go in the kitchen, so I just won't cook either." She actually verbalized this to me at some point. She also said (not in so many words) that she would prefer to be the sort of person who just washed dishes and got on with life. So there was more to "what she said" than her telling me she would wash the dishes (which someone who didn't care about dishes might say anyway, for social reasons).

Obviously all people are agents to some degree, and can be agents to different degrees on different days depending on, say, tiredness or whether they're around their parents (I become noticeably less agenty around mine). But these distinctions aren't actually what my brain perceives; my brain latches onto some information that in retrospect is probably relevant, like my roommate saying she wants to be the sort of person who just washes dishes but not washing any dishes, and onto things that aren't relevant to agentiness, like impressiveness.

I model people constantly, but agency and the "PC vs. NPC" distinction don't even come into it. There are classes of models, but they're more like classes of computational automata: less or more complex, roughly scaling with the scope of my interactions with a person. For instance, it's usually fine to model a grocery store cashier as a nondeterministic finite state machine; handing over groceries and paying are simple enough interactions that an NFSM suffices. Of course the cashier has just as much agency and free will as I do – but there's a p…
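
A minimal sketch of that cashier model in Python (states, inputs, and transitions are all hypothetical; the nondeterminism shows up as tracking the set of states the cashier might currently be in):

```python
# A toy nondeterministic finite state machine for a checkout interaction.
# Each (state, input) pair maps to a *set* of possible next states.

TRANSITIONS = {
    ("greeting", "place_items"): {"scanning"},
    ("scanning", "items_done"): {"awaiting_payment", "small_talk"},
    ("small_talk", "reply"): {"awaiting_payment"},
    ("awaiting_payment", "pay"): {"done"},
}

def step(states, event):
    """Advance every currently-possible state on the given input.
    Unmodelled (state, input) pairs leave the set unchanged: that
    boundary is exactly where an automaton this simple stops sufficing."""
    nxt = set().union(*(TRANSITIONS.get((s, event), set()) for s in states))
    return nxt or states

states = {"greeting"}
for event in ["place_items", "items_done", "reply", "pay"]:
    states = step(states, event)
print(states)  # {'done'}
```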

I was thinking about this recently in the context of the game Diplomacy. One way to play is to model your opponents as rational, self-interested actors making the optimal move for them at a particular time. This can separate off your attitudes to people in the game from your normal emotional reactions (e.g. move from "he stabbed me in the back" to "he acted in his self-interest, as I should have expected").

[An interesting exercise would be to write down each turn what you predict the other players will do and compare that to their actions.]
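
A minimal sketch of that bookkeeping in Python (the move notation and data layout are invented for illustration):

```python
# Before the turn resolves, write down one predicted order per player;
# afterwards, compare against what they actually did.
predictions = {"England": "F London -> North Sea", "France": "A Paris -> Burgundy"}
actual      = {"England": "F London -> North Sea", "France": "A Paris -> Picardy"}

hits = sum(predictions[p] == actual[p] for p in predictions)
print(f"{hits}/{len(predictions)} predictions correct")  # 1/2 predictions correct
```

Tracked over a whole game, the per-player hit rate shows which opponents you are actually modelling and which you are merely projecting onto.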

[-][anonymous]

There are at best seven people in the world who are actually modelled as agents in my own head. My algorithm for predicting the behaviour of an individual generally follows:

1) Find out what someone of their social class normally does.
2) Assume they will continue to do that, plus or minus some hobbies and quirks.
3) If they deviate really strangely, check how they have reacted to past crises and whether they have any interests which make them likely to deviate again. If this fails, then begin modelling them to increase prediction accuracy.

This works reasonably…
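
A minimal sketch of that three-step heuristic in Python (the baseline data, quirk handling, and promotion threshold are all invented for illustration):

```python
# Toy version of the fallback strategy: class baseline first, adjust for
# quirks, and only promote someone to individual modelling after they
# have surprised the baseline repeatedly.

CLASS_BASELINE = {"grad_student": "works late, complains about funding"}

def predict(person):
    # Steps 1 and 2: what their class normally does, plus known quirks.
    baseline = CLASS_BASELINE[person["class"]]
    return baseline + "".join(f", plus {q}" for q in person.get("quirks", []))

def observe(person, behaviour):
    # Step 3: repeated strange deviations flag the person as needing a
    # real individual model.
    if behaviour != predict(person):
        person["surprises"] = person.get("surprises", 0) + 1
        if person["surprises"] >= 3:
            person["needs_individual_model"] = True

alice = {"class": "grad_student", "quirks": ["rock climbing"]}
print(predict(alice))  # works late, complains about funding, plus rock climbing
observe(alice, "quit to found a startup")
```

The threshold of three surprises is arbitrary; the structure of the fallback is the point.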

[-][anonymous]00

I'd personally name the ability to change opinions and behaviour as the most important difference between PC and NPC.

[This comment is no longer endorsed by its author]

So, you (Swimmer963) think of agenty people as being those who:

  1. Are reliable
  2. Are skilled (in areas you are less familiar with)
  3. Act deliberately, especially for their own interest

It is interesting that all three of these behaviors seem to be high status behaviors. So, my question is this: does high status make someone seem more agenty to you? Could sufficiently high status be a sufficient condition for someone being "agenty"?

pwno
Reliable/predictable isn't high status.
[-][anonymous]

After I read the question "do you model people as agents versus complex systems?", I started to wonder which of the two options is more "sophisticated". Is an agent more sophisticated than a complex system, or vice versa? I don't really have an opinion here.

Something I like to tell myself is that people are animals first and foremost. Whenever anyone does anything I find strange, unusual, or irrational, my instinct is to speculate about the cause of the behavior. If person A is rude towards person B, I don't think, "person A is bei…

Viliam_Bur
An agent is a specific kind of complex system.
[anonymous]
I thought we were pretending that the two are mutually exclusive. Agents have magical free will, complex systems don't.
Viliam_Bur
Okay. I just don't like words defined as "X, except for Y" (specifically: complex systems, except for those who have magical free will). If we tried to avoid this "excepting", the question would be rephrased as: […] But I am not sure how exactly that helps, so... uhm, end of nitpicking.
Decius
I model my computer as a complex system; when it has undesired behavior, I give it a known set of conditions and it behaves consistently and often predictably. I don't expect it to engage in goal-oriented behavior. There are people who I model in a similar manner – I know what they do in certain conditions, and I don't ask what it is they are trying to accomplish. There are cases where I behave in a similar manner, performing sphexish behavior even while consciously aware of it. Noticing that I am doing that evokes cognitive dissonance, so I guess I don't actually model myself that way, even when it would be accurate to do so.
[anonymous]
Huh. I frequently notice myself behaving in a seemingly robotic fashion, doing stuff "automatically" with no real conscious input (e.g. when doing simple, routine tasks like folding laundry), but it doesn't give me any feeling of cognitive dissonance.
Decius
What about when the behavior you are doing has counterproductive results?
[anonymous]
What are you asking, exactly? To try to answer your question: If I find myself behaving "automatically" in a counterproductive manner, that's an uncomfortable situation to be in, and to me it emphasizes the fact that I'm not a "pure goal-oriented agent". I do feel a sort of cognitive dissonance in these cases, I think; I feel like the fact that I'm not behaving productively is "my fault" and that it would be easy for me to stop doing what I'm doing, while simultaneously feeling like it would be very difficult to stop doing what I'm doing.
Decius
Because I described a situation in which I felt a certain way, and you expressed that you felt a different way in a situation which had certain similarities. I felt that I could identify a significant difference between those situations and wanted to confirm that we probably have similar subjective experiences when confronted with similar enough circumstances. Had I discovered a difference, it would be worth further discussion. I'm unsure if this similarity is worth further discussion. Feeling like it would be trivial to do something else, believing that I want to do something else, but not doing something else is a common enough failure mode for me to be worrisome.
[anonymous]
*nod* Tangential question: why did you use "failure mode" there instead of "problem"?
Decius
I haven't codified the exact distinction that I make between those two concepts. In the case of materials science, a 'problem' would be a pressure vessel at a low temperature containing high pressure; the failure mode of such a problem would be brittle fracture. In this case it might also have made sense to call it a class of problems; each instance is different enough that a general solution would be different in nature from a series of specific solutions which, combined, covered every individual case.
Document
You assume that when someone appears to be acting in anger, they're actually acting in the way they've decided was best after weighing the facts?
[anonymous]
Well, no. In the particular case I had in mind, person A was being rude, and so I figured person A was frustrated with person B and believed person B was misbehaving. I asked person A if he thought rudeness was justified in this situation, and he said yes.
Document
Did he ask himself that question before reacting to person B's behavior?
[anonymous]
I doubt that he did, so good point.
Decius
What's the difference between someone who commonly believes that rudeness is appropriate, and a rude person?
PeterisP
If you model X as a "rude person", then you expect him to be rude with a high[er than average] probability, period. However, if you model X as an agent that believes that rudeness is appropriate in common situations A, B, C, then you expect that he might behave less rudely (a) if he would perceive that this instance of a common 'rude' situation is nuanced and that rudeness is not appropriate there; or (b) if he could be convinced that rudeness in situations like that is contrary to his goals, whatever those may be.

In essence, it's simpler and faster to evaluate expected reactions for people that you model as just complex systems; you can usually do that right away. But if you model goal-oriented behavior, "walk a mile in his shoes", and try to understand the intent of every [non]action and the causes of that, then it tends to be tricky but allows you more depth in both accurate expectations and the ability to affect the behavior. However, if you do it poorly, or simply lack the data necessary to properly understand the reasons/motivations of that person, then you'll tend to get gross misunderstandings.
Document
One has a particular belief, while the other follows a particular pattern of behavior? Not sure I see what you're getting at.
[anonymous]
That's not what they said. They said that they believe that rudeness is justified in the situation. That belief could change (or could not) upon further reflection. Hence the concept of regret.
Document
Not thinking about a question isn't a belief, or rocks have beliefs.
[anonymous]
There's a difference between the slow, methodical, relatively inefficient (in terms of effort required for a decision) mode of thought and the instant thoughts we all have (which we use for almost everything we do, and which are pretty good for many things, but not all things).
Document
Although we've gone from "beliefs" to "thought(s)", it looks like overall we're disputing definitions.