Why officers vs. enlisted?

13 JoshuaFox 30 October 2013 08:14PM

It's always puzzled me that, in armies, officers form a separate hierarchical ladder from the NCOs and enlisted soldiers.

Armies could have a single hierarchy, top to bottom, as in the simplified diagram below on the left. Instead, all armies have two distinct ladders, with one strictly above the other, as on the right.  (Reminds me of those wacky non-standard integers.)

[Diagram: a single top-to-bottom hierarchy (left) vs. two separate ladders, with the officer ladder strictly above the enlisted ladder (right)]

The usual answers are obvious but irrelevant: Yes, some people shoot straight to a position high on the ladder. You could do that with either model. Yes, even when those lower down on the ladder have more experience and wisdom, it can make practical sense to have a hierarchy. Yes, the higher someone is, the higher the level of the decisions they make. You could likewise do these on a one-ladder model.
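The "one strictly above the other" structure of the two ladders amounts to a lexicographic ordering: compare which ladder someone is on first, and only then the rung within it. Here is a minimal Python sketch of that ordering (the class, ladder names, and rung numbers are hypothetical illustrations, not real rank tables):

```python
from functools import total_ordering

# Two ladders: every officer outranks every enlisted soldier,
# no matter how high the enlisted rung. Comparison is
# lexicographic: ladder first, then rung within the ladder.
LADDERS = {"enlisted": 0, "officer": 1}

@total_ordering
class Rank:
    def __init__(self, ladder, rung):
        self.ladder = ladder  # "enlisted" or "officer"
        self.rung = rung      # position within that ladder

    def _key(self):
        return (LADDERS[self.ladder], self.rung)

    def __eq__(self, other):
        return self._key() == other._key()

    def __lt__(self, other):
        return self._key() < other._key()

sergeant_major = Rank("enlisted", 9)  # top of the enlisted ladder
second_lt = Rank("officer", 1)        # bottom of the officer ladder

# The most senior NCO still ranks below the most junior officer:
assert sergeant_major < second_lt
```

A single-ladder army would instead compare rung numbers directly on one scale; the two-ladder model makes the ladder itself dominate every within-ladder comparison, which is exactly the "non-standard integers" flavor of the diagram.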

It's said that officers "decide," while non-officers "just carry out orders"; or that officers choose strategy, and non-officers do tactics. But everyone makes decisions, on their own level. A private makes decisions for himself, a corporal for three soldiers, and a colonel for a thousand, each one in the context of their orders from above. One soldier's strategy is his superior's tactics. And the distinction is not based on command: New army doctors automatically become officers, even if they don't command anyone. Doctors are non-combatants, but fighter pilots are combatants par excellence, don't command anyone, and are all officers.

These answers don't explain why there need to be two ladders. I asked on Quora without getting a convincing answer. Historically, the distinction was based on social class, but that doesn't explain why every army follows this arrangement, including those in very different societies.

Similarly: What's a corporate executive? (I'm talking about large companies here; small companies and startups are different.) I understand that there is a management hierarchy, but why the arbitrary distinction between a senior manager and a junior executive? Aren't those just two rungs on the ladder? In corporate-speak, an executive is called a "decision maker." What a strange term! Isn't a manager or even a lowly "individual contributor" also a decision maker -- at the scope that their own managers allow? (I should add that the two-ladder system is not as developed in business as it is in the army or in medicine: there is no career ladder for non-execs that extends arbitrarily high, though always below the execs.)

Not all professions work that way. Actuaries have ten levels, based on passing a sequence of exams. And though some areas of engineering distinguish an engineer from a technician, software engineering has no such dichotomy: Some software engineers make more money, and some make broader decisions or manage others, but there is no such two-way split.

In medicine, on the other hand, there is a clear distinction between doctors and nurses. There are different status levels among doctors and among nurses, but a PhD in nursing stands on the other side of a clear border from a beginning MD. Similarly with lawyers and paralegals. These dichotomies stem from licensing restrictions, which in turn are descended from medieval guild practices. But why does it have to be this way? Why not just rank medical personnel, or legal personnel, in a single continuum from practical nurse through rockstar brain surgeon? (Is that a title?) There would still be the understanding that some people will never climb beyond a certain point, while others can jump straight to a higher rung.

The answer lies in LessWrong's concept of "agentiness": Making "choices so as to maximize the fulfillment of explicit desires, given explicit beliefs." Less abstractly, it is sometimes described as "reliability and responsibility." Agenty types get to be called "Player Characters" or heroes. ("Agenty" and "agentiness" are made-up words; the standard terminology is "agent" and "agency." I think "agenty" was coined to point out that while all humans are agents to some extent, some do it far better than others.)

In the organizational context, officers and executives are meant to be agenty, while enlisted soldiers, NCOs, and non-executives are not. The officers and executives plan towards achieving goals, while everyone else executes defined tasks. The officers and executives make high-variance decisions, with high risks and high returns, while everyone else has the job of just doing their job consistently and not messing up.

Is agentiness a natural kind, a cluster in thingspace, a joint-carving concept? Might agentiness just be a mix of features that occur to varying degrees in various contexts?

We might say that agentiness is a continuum: Everyone has some, but some people have more than others. Lower-downs sometimes have goals, and higher-ups often act like cogs. Moreover, the agentiness of officers and executives exists strictly in the context of their superiors' goals: In their roles they are meant to be agenty on behalf of the organization, not in pursuit of their individual goals.

Some people are non-agenty in some of their social roles and agenty in others. For example, I know workers who readily admit to being lowly cogs in a machine, but who have tremendous achievements in setting up and leading non-profits outside work hours. Some hard-driving workaholics are milquetoasts at home. Some caring, wise, foresightful parents are limp rags at work.

But agentiness is a real concept, at least as far as the officers and executives go. Their roles are implicitly defined by agentiness, and armies and corporations decide which people have it (or at least are meant to). These organizations agree with LessWrong that agentiness is a natural kind.

Thoughts on the January CFAR workshop

37 Qiaochu_Yuan 31 January 2013 10:16AM

So, the Center for Applied Rationality just ran another workshop, which Anna kindly invited me to. Below I've written down some thoughts on it, both to organize those thoughts and because it seems other LWers might want to read them. I'll also invite other participants to write down their thoughts in the comments. Apologies if what follows isn't particularly well-organized. 

Feelings and other squishy things

The workshop was totally awesome. This is admittedly not strong evidence that it accomplished its goals (cf. Yvain's comment here), but being around people motivated to improve themselves and the world was totally awesome, and learning with and from them was also totally awesome, and that seems like a good thing. 

Also, the venue was fantastic. CFAR instructors reported that this workshop was more awesome than most, and while I don't want to discount improvements in CFAR's curriculum and its selection process for participants, I think the venue counted for a lot. It was uniformly beautiful and there were a lot of soft things to sit or take naps on, and I think that helped everybody be more comfortable with and relaxed around each other. 

Main takeaways

Here are some general insights I took away from the workshop. Some of them I had already been aware of on some abstract intellectual level but hadn't fully processed and/or gotten drilled into my head and/or seen the implications of. 

  1. Epistemic rationality doesn't have to be about big things like scientific facts or the existence of God, but can be about much smaller things like the details of how your particular mind works. For example, it's quite valuable to understand what your actual motivations for doing things are. 
  2. Introspection is unreliable. Consequently, you don't have direct access to information like your actual motivations for doing things. However, it's possible to access this information through less direct means. For example, if you believe that your primary motivation for doing X is that it brings about Y, you can perform a thought experiment: imagine a world in which Y has already been brought about. In that world, would you still feel motivated to do X? If so, then there may be reasons other than Y that you do X. 
  3. The mind is embodied. If you consistently model your mind as separate from your body (I have in retrospect been doing this for a long time without explicitly realizing it), you're probably underestimating the powerful influence of your mind on your body and vice versa. For example, dominance of the sympathetic nervous system (which governs the fight-or-flight response) over the parasympathetic nervous system is unpleasant, unhealthy, and can prevent you from explicitly modeling other people. If you can notice and control it, you'll probably be happier, and if you get really good, you can develop aikido-related superpowers.
  4. You are a social animal. Just as your mind should be modeled as a part of your body, you should be modeled as a part of human society. For example, if you don't think you care about social approval, you are probably wrong, and thinking that will cause you to have incorrect beliefs about things like your actual motivations for doing things. 
  5. Emotions are data. Your emotional responses to stimuli give you information about what's going on in your mind that you can use. For example, if you learn that a certain stimulus reliably makes you angry and you don't want to be angry, you can remove that stimulus from your environment. (This point should be understood in combination with point 2 so that it doesn't sound trivial: you don't have direct access to information like what stimuli make you angry.) 
  6. Emotions are tools. You can trick your mind into having specific emotions, and you can trick your mind into having specific emotions in response to specific stimuli. This can be very useful; for example, tricking your mind into being more curious is a great way to motivate yourself to find stuff out, and tricking your mind into being happy in response to doing certain things is a great way to condition yourself to do certain things. Reward your inner pigeon.

Here are some specific actions I am going to take / have already taken because of what I learned at the workshop. 

  1. Write a lot more stuff down. What I can think about in my head is limited by the size of my working memory, but a piece of paper or a WorkFlowy document doesn't have this limitation. 
  2. Start using a better GTD system. I was previously using RTM, but badly: I used it exclusively from my iPhone, where a new item's due date defaults to "today." When adding an item from a browser, the due date instead defaults to "never," but since I had never used the browser interface, I didn't even realize that "never" was an option. As a result, RTM items that didn't actually have due dates got them anyway, and I became reluctant to add items that really didn't have due dates (e.g. "look at this interesting thing sometime"). That was bad because RTM wasn't collecting a lot of things, and I stopped trusting my own due dates. 
  3. Start using Boomerang to send timed email reminders to future versions of myself. I think this might work better than using, say, calendar alerts because it should help me conceptualize past versions of myself as people I don't want to break commitments to. 

I'm also planning to take various actions that I'm not writing above but instead putting into my GTD system, such as practicing specific rationality techniques (the workshop included many useful worksheets for doing this) and investigating specific topics like speed-reading and meditation. 

The arc word (TVTropes warning) of this workshop was "agentiness." ("Agentiness" is more funtacular than "agency.") The CFAR curriculum as a whole could be summarized as teaching a collection of techniques to be more agenty. 

Miscellaneous

A distinguishing feature that the people I met at the workshop seemed to have in common was the ability to go meta. This skill was not explicitly mentioned or taught (although it was frequently implicit in the kinds of jokes people told), but it strikes me as an important foundation for rationality: it seems hard to make progress with rationality unless the thought of using your brain to improve how you use your brain, and also to improve how you improve how you use your brain, is both understandable and appealing to you. This probably eliminates most people as candidates for rationality training unless it's paired with, or maybe preceded by, meta training, whatever that looks like.

One problem with the workshop was lack of sleep, which seemed to wear out both participants and instructors by the last day (classes started early in the day and conversations often continued late into the night because they were unusually fun / high-value). Offering everyone modafinil or something at the beginning of future workshops might help with this.

Overall

Overall, while it's too soon to tell how big an impact the workshop will have on my life, I anticipate a big impact, and I strongly recommend that aspiring rationalists attend future workshops.