Does Goal Setting Work?
tl;dr There's some disagreement over whether setting goals is a good idea. Anecdotally, enjoyment in setting goals and success at accomplishing them varies between people, for various possible reasons. Publicly setting goals may reduce motivation by providing a status gain before the goal is actually accomplished. Creative work may be better accomplished without setting goals about it. 'Process goals', 'systems' or 'habits' are probably better for motivation than 'outcome' goals. Specific goals are probably easier on motivation than vague ones. Having explicitly set goals can cause problems in organizations, and maybe for individuals.
Introduction
I experimented by letting go of goals for a while and just going with the flow, but that produced even worse results. I know some people are fans of that style, but it hasn’t worked well for me. I make much better progress — and I’m generally happier and more fulfilled — when I wield greater conscious control over the direction of my life.
The inherent problem with goal setting is related to how the brain works. Recent neuroscience research shows the brain works in a protective way, resistant to change. Therefore, any goals that require substantial behavioural change or thinking-pattern change will automatically be resisted. The brain is wired to seek rewards and avoid pain or discomfort, including fear. When fear of failure creeps into the mind of the goal setter it commences a de-motivator with a desire to return to known, comfortable behaviour and thought patterns.
Ray Williams
I can’t read these two quotes side by side and not be confused.
There’s been quite a bit of discussion within Less Wrong and CFAR about goals and goal setting. On the whole, CFAR seems to go with it being a good idea. There are some posts that recognize the possible dangers: see patrissimo’s post on the problems with receiving status by publicly committing to goals. Basically, if you can achieve the status boost of actually accomplishing a goal by just talking about it in public, why do the hard work? This discussion came up fairly recently with the Ottawa Less Wrong group; specifically, whether introducing group goal setting was a good idea.
I’ve always set goals–by ‘always’ I mean ‘as far back as I can identify myself as some vaguely continuous version of my current self.’ At age twelve, some of my goals were concrete and immediate–“get a time under 1 minute 12 seconds for a hundred freestyle and make the regional swim meet cut.” Some were ambitious and unlikely–“go to the Olympics for swimming,” and “be the youngest person to swim across Lake Ontario.” Some were vague, like “be beautiful” or “be a famous novelist.” Some were chosen for bad reasons, like “lose 10 pounds.” My 12-year-old self wanted plenty of things that were unrealistic, or unhealthy, or incoherent, but I wanted them, and it seemed to make perfect sense to do something about getting them. I took the bus to swim practice at six am. I skipped breakfast and threw out the lunch my mom packed. Et cetera. I didn't write these goals down in a list format, but I certainly kept track of them, in diary entries among other things. I sympathize with the first quote, and the second quote confuses and kind of irritates me–seriously, Ray Williams, you have that little faith in people's abilities to change?
For me personally, I'm not sure what the alternative to having goals would be. Do things at random? Do whatever you have an immediate urge to do? Actually, I do know people like this. I know people whose stated desires aren’t a good predictor of their actions at all, and I’ve had a friend say to me “wow, you really do plan everything. I just realized I don’t plan anything at all.” Some of these people get a lot of interesting stuff done. So this may just be an individual variation thing; my comfort with goal setting, and discomfort with making life up as I go, might be a result of my slightly-Aspergers need for control. It certainly comes at a cost–the cost of basing self-worth on an external criterion, and the resulting anxiety and feelings of inadequacy. I have an enormous amount of difficulty with the Buddhist virtue of ‘non-striving.’
Why the individual variation?
The concepts of the motivation equation and success spirals give another hint at why goal-driven behaviour might vary between people. Nick Winter talks about this in his book The Motivation Hacker; he shows the difference between his past self, who had very low expectancy of success and set few goals, and his present self, with high expectancy of success and with goal-directed behaviour filling most of his time.
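For readers who haven't seen it spelled out, the motivation equation Winter builds on (it comes from Piers Steel's temporal motivation theory) is usually written as:

Motivation = (Expectancy × Value) / (Impulsiveness × Delay)

Success spirals work on the Expectancy term: each small accomplished goal raises your estimate that the next one is achievable, which raises motivation, which in turn makes the next success more likely.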
I actually remember a shift like this in my own life, although it was back in seventh grade and I’ve probably editorialized the memories to make a good narrative. My sixth grade self didn’t really have a concept of wanting something and thus doing something about it. At some point, over a period of a year or two, I experienced some minor successes. I was swimming faster, and for the first time ever, a coach made comments about my ‘natural talent.’ My friends wanted to get on the honour roll with an 80% average, and in first semester, both of them did and I didn’t; I was upset and decided to work harder, a concept I’d never applied to school, and saw results the next semester when my average was on par with theirs. It only took a few events like that, inconsequential in themselves, before my self-image was of someone who could reliably accomplish things through hard work. My parents helpfully reinforced this self-stereotype by making proud comments about my willpower and determination.
In hindsight I'm not sure whether this was a defining year; whether it actually made the difference, in the long run, or whether it was inevitable that some cluster of minor successes would have set off the same cascade later. It may be that some innate personality trait distinguishes the people who take those types of experiences and interpret them as success spirals from those who remain disengaged.
The More Important Question
Apart from the question of personal individual variation, though, there's a more relevant question. Given that you're already at a particular place on the continuum from planning-everything to doing-everything-as-you-feel-like-it, how much should you want to set goals, versus following urges? More importantly, which actions are helped versus harmed by explicit goal-setting?
Creative Goals
As Paul Graham points out, a lot of the cool things that have been accomplished in the past weren’t done through self-discipline:
One of the most dangerous illusions you get from school is the idea that doing great things requires a lot of discipline. Most subjects are taught in such a boring way that it's only by discipline that you can flog yourself through them. So I was surprised when, early in college, I read a quote by Wittgenstein saying that he had no self-discipline and had never been able to deny himself anything, not even a cup of coffee.
Now I know a number of people who do great work, and it's the same with all of them. They have little discipline. They're all terrible procrastinators and find it almost impossible to make themselves do anything they're not interested in. One still hasn't sent out his half of the thank-you notes from his wedding, four years ago. Another has 26,000 emails in her inbox.
I'm not saying you can get away with zero self-discipline. You probably need about the amount you need to go running. I'm often reluctant to go running, but once I do, I enjoy it. And if I don't run for several days, I feel ill. It's the same with people who do great things. They know they'll feel bad if they don't work, and they have enough discipline to get themselves to their desks to start working. But once they get started, interest takes over, and discipline is no longer necessary.
Do you think Shakespeare was gritting his teeth and diligently trying to write Great Literature? Of course not. He was having fun. That's why he's so good.
This seems to imply that creative goals aren't a good place to apply goal setting. But I'm not sure how much this is a fundamental truth. I recently made a Beeminder goal for writing fiction, and I've written fifty pages since then. I actually don't have the writer's virtue of just sitting down and writing; in the past, I've written most of my fiction by staying up late in a flow state. I can't turn this on and off, though, and more importantly, I have a life to schedule my writing around, and if the only way I can get a novel done is to stay up all night before a 12-hour shift at the hospital, I probably won't write that novel. I rarely want to do the hard work of writing; it's a lot easier to lie in bed thinking about that one awesome scene five chapters down the road and lamenting that I don't have time to write tonight because of work in the morning.
Even if Shakespeare didn’t write using discipline, I bet that he used habits. That he sat down every day with a pen and parchment and fully expected himself to write. That he had some kind of sacred writing time, not to be interrupted by urgent-but-unimportant demands. That he’d built up some kind of success spiral around his ability to write plays that people would enjoy.
Outcome versus process goals
Goal setting sets up an either-or polarity of success. The only true measure can either be 100% attainment or perfection, or 99% and less, which is failure. We can then excessively focus on the missing or incomplete part of our efforts, ignoring the successful parts. Fourthly, goal setting doesn't take into account random forces of chance. You can't control all the environmental variables to guarantee 100% success.
This quote talks about a type of goal that I don't actually set very often. Most of the ‘bad’ goals that I had as a 12-year-old were unrealistic outcome goals, and I failed to accomplish plenty of them; I didn’t go to the Olympics, I didn’t swim across Lake Ontario, and I never got down to 110 pounds. But I still have the self-concept of someone who’s good at accomplishing goals, and this is because I accomplished almost all of my more implicit ‘process’ goals. I made it to swim practice seven times a week, waking up at four-thirty am year after year. This didn’t automatically lead to Olympic success, obviously, but it was hard, and it impressed people. And yeah, I missed a few mornings, but in my mind 99% success or even 90% success at a goal is still pretty awesome.
In fact, I can’t think of any examples of outcome goals that I’ve set recently. Even “become a really awesome nurse” feels like more of a process goal, because it's something I'll keep doing on a day-to-day basis, requiring a constant input of effort.
Scott Adams, of Dilbert fame, refers to this dichotomy as ‘systems’ versus ‘goals’:
Just after college, I took my first airplane trip, destination California, in search of a job. I was seated next to a businessman who was probably in his early 60s. I suppose I looked like an odd duck with my serious demeanor, bad haircut and cheap suit, clearly out of my element. I asked what he did for a living, and he told me he was the CEO of a company that made screws. He offered me some career advice. He said that every time he got a new job, he immediately started looking for a better one. For him, job seeking was not something one did when necessary. It was a continuing process... This was my first exposure to the idea that one should have a system instead of a goal. The system was to continually look for better options.
Throughout my career I've had my antennae up, looking for examples of people who use systems as opposed to goals. In most cases, as far as I can tell, the people who use systems do better. The systems-driven people have found a way to look at the familiar in new and more useful ways.
...To put it bluntly, goals are for losers. That's literally true most of the time. For example, if your goal is to lose 10 pounds, you will spend every moment until you reach the goal—if you reach it at all—feeling as if you were short of your goal. In other words, goal-oriented people exist in a state of nearly continuous failure that they hope will be temporary.
If you achieve your goal, you celebrate and feel terrific, but only until you realize that you just lost the thing that gave you purpose and direction. Your options are to feel empty and useless, perhaps enjoying the spoils of your success until they bore you, or to set new goals and re-enter the cycle of permanent presuccess failure.
I guess I agree with him–if you feel miserable when you've lost 9 pounds because you haven't accomplished your goal yet, and empty after you've lost 10 pounds because you no longer have a goal, then whatever you're calling 'goal setting' is a terrible idea. But that's not what 'goal setting' feels like to me. I feel increasingly awesome as I get closer towards a goal, and once it's done, I keep feeling awesome when I think about how I did it. Not awesome enough to never set another goal again, but awesome enough that I want to set lots more goals to get that feeling again.
SMART goals
When I work with people as their coach and mentor, they often tell me they've set goals such as "I want to be wealthy," or "I want to be more beautiful/popular," "I want a better relationship/ideal partner." They don't realize they've just described the symptoms or outcomes of the problems in their life. The cause of the problem, that many resist facing, is themselves. They don't realize that for a change to occur, if one is desirable, they must change themselves. Once they make the personal changes, everything around them can alter, which may make the goal irrelevant.
Ray Williams
And? Someone has to change themselves to fix the underlying problem? Are they going to do that more successfully by going with the flow?
I think the more important dichotomy here is between vague goals and specific goals. I was exposed to the concept of SMART goals (specific, measurable, attainable, relevant, time-bound) at an early age, and though the concept has a lot of problems, the ability to Be Specific seems quite important. You can break down “I want to be beautiful” into subgoals like “I'll learn to apply makeup properly”, “I'll eat healthy and exercise”, “I'll go clothing shopping with a friend who knows about fashion,” etc. All of these feel more attainable than the original goal, and it's clear when they're accomplished.
That being said, I have a hard time setting any goal that isn’t specific, attainable, and small. I’ve become more ambitious since meeting lots of LW and CFAR people, but I still don’t like large, long-term goals unless I can easily break them down into intermediate parts. This makes the idea of working on an unsolved problem, or in a startup where the events of the next year aren’t clear, deeply frightening. And these are obviously important problems that someone needs to motivate themselves to work on.
Problematic Goal-Driven Behaviour
We argue that the beneficial effects of goal setting have been overstated and that systematic harm caused by goal setting has been largely ignored. We identify specific side effects associated with goal setting, including a narrow focus that neglects non-goal areas, a rise in unethical behaviour, distorted risk preferences, corrosion of organizational culture, and reduced intrinsic motivation. Rather than dispensing goal setting as a benign, over-the-counter treatment for motivation, managers and scholars need to conceptualize goal setting as a prescription-strength medication that requires careful dosing, consideration of harmful side effects, and close supervision.
This is a fairly compelling argument against goal-setting: by setting an explicit goal and then optimizing towards that goal, you may be losing out on elements that were being accomplished better before, and maybe even rewarding actively harmful behaviour. Members of an organization presumably already have assigned tasks and responsibilities, and aren't just doing whatever they feel like doing, but they might have done better with more freedom to prioritize their own work–the best environment is one with some structure and goals, but not too many. The phenomenon of “teaching to the test” for standardized testing is another example.
Given that humans aren’t best described as unitary selves, this metaphor extends to individuals. If one aspect of myself sets a personal goal to write two pages per day, another aspect of myself might respond by writing two pages on the easiest project I can think of, like a journal entry that no one will ever see. This violates the spirit of the goal it technically accomplishes.
A more problematic consideration is the relationship between intrinsic and extrinsic motivation. Studies show that rewarding or punishing children for tasks results in less intrinsic motivation, as measured by stated interest or by freely choosing to engage in the task. I’ve noticed this tendency in myself; faced with a nursing instructor who was constantly quizzing me on the pathophysiology of my patients’ conditions, I responded by refusing to be curious about any of it or look up the answers to questions in any more detail than what she demanded, even though my previous self loved to spend hours on Google making sense of confusing diseases. If this is a problem that affects individuals setting goals for themselves–i.e. if setting a daily writing goal makes writing less fun–then I can easily see how goal-setting could be damaging.
I also notice that I’m confused about the relationship between Beeminder’s extrinsic motivation, in the form of punishment for derailing, and its effects on intrinsic motivation. Maybe the power of success spirals to increase intrinsic motivation offsets the negative effect of outside reward/punishment; or maybe the fact that users deliberately choose to use Beeminder means that it doesn’t count as “extrinsic.” I’m not sure.
Conclusion
There seems to be variation between individuals, in terms of both generally purposeful behaviour, and comfort level with calling it ‘setting goals’. This might be related to success spirals in the past, or it might be a factor of personality and general comfort with order versus chaos. I’m not sure if it’s been studied.
In the past, a lot of creative behaviour wasn't the result of deliberate goals. This may be a fundamental fact about creativity, or it may be a result of people's beliefs about creativity (à la ego depletion only happens if you believe in ego depletion), or it may be a historical coincidence that isn't fundamental at all. In any case, if you aren't currently getting creative work done, and want to do more, I'm not sure what the alternative is to purposefully trying to do more. Manipulating the environment to make flow easier to attain, maybe. (For example, if I quit my day job and moved to a writers' commune, I might write more without needing to try on a day-to-day basis.)
Process goals, or systems, are probably better than outcome goals. Specific and realistic goals are probably better than vague and ambitious ones. A lot of this may be because it’s easier to form habits and/or success spirals around well-specified behaviours that you can just do every day.
Setting goals within an organization has a lot of potential problems, because workers can game the system and accomplish the letter of the goal in the easiest possible way. This likely happens within individuals too. Research shows that extrinsic motivation reduces intrinsic motivation, which is important to consider, but I'm not sure how it relates to individuals setting goals, as opposed to organizations.
Meetup : Applied Rationality Talks: Thinking in Bayes
This is the second in a series of talks on CFAR and Less Wrong rationality topics offered to the Ottawa Skeptics meetup group. The format will be a fifteen-minute talk followed by drinks and structured discussion.
To what degree do you model people as agents?
The idea for this post came out of a conversation during one of the Less Wrong Ottawa events. A joke about being solipsist turned into a genuine question–if you wanted to assume that people were figments of your imagination, how much of a problem would this be? (Being told "you would be problematic if I were a solipsist" is a surprising compliment.)
You can rephrase the question as "do you model people as agents versus complex systems?" or "do you model people as PCs versus NPCs?" (To me these seem like a reframing of the same question, with a different connotation/focus; to other people they might seem like different questions entirely). Almost everyone at the table immediately recognized what we were talking about and agreed that modelling some people as agents and some people as complex systems was a thing they did. However, pretty much everything else varied–how much they modelled people as agents overall, how much it varied in between different people they knew, and how much this impacted the moral value that they assigned to other people. I suspect that another variable is "how much you model yourself as an agent"; this probably varies between people and impacts how they model others.
What does it mean to model someone as an agent?
The conversation didn't go here in huge amounts of detail, but I expect that, due to the typical mind fallacy, it's a fascinating discussion to have: the distinctions that seem clear and self-evident to me probably aren't what other people use at all. I'll explain mine here.
1. Reliability and responsibility. Agenty people are people I feel I can rely on, who I trust to take heroic responsibility. If I have an unsolved problem and no idea what to do, I can go to them in tears and say "fix this please!" And they will do it. They'll pull out a solution that surprises me and that works. If the first solution doesn't work, they will keep trying.
In this sense, I model my parents strongly as agents–I have close to 100% confidence that they will do whatever it takes to solve a problem for me. There are other people who I trust to execute a pre-defined solution for me, once I've thought of it, like "could you do me a huge favour and drive me to the bike shop tomorrow at noon?" but whom I wouldn't go to with "AAAAH my bike is broken, help!" There are other people who I wouldn't ask for help, period. Some of them are people I get along with well and like a lot, but they aren't reliable, and they're further down the mental gradient towards NPC.
The end result of this is that I'm more likely to model people as agents if I know them well and have some kind of relationship where I would expect them to want to help me. Of course, this is incomplete, because there are brilliant, original people who I respect hugely, but who I don't know well, and I wouldn't ask or expect them to solve a problem in my day-to-day life. So this isn't the only factor.
2. Intellectual formidability. To what extent someone comes up with ideas that surprise me and seem like things I would never have thought of on my own. This also includes people who have accomplished things that I can't imagine myself succeeding at, like startups. In this sense, there are a lot of bloggers, LW posters, and people on the CFAR mailing list who are major PCs in my mental classification system, but who I may not know personally at all.
3. Conventional "agentiness". The degree to which a person's behaviour can be described by "they wanted X, so they took action Y and got what they wanted", as opposed to "they did X kind of at random, and Y happened." When people seem highly agenty to me, I model their mental processes like this–my brother is one of them. I take the inside view, imagining that I wanted the thing they want and had their characteristics, e.g. relative intelligence, domain-specific expertise, social support, etc., and this gives better predictions than past behaviour. There are other people whose behaviour I predict based on how they've behaved in the past, using the outside view, while barely taking into account what they say they want in the future, and this is what gives useful predictions.
This category also includes the degree to which people have a growth mindset, which approximates how much they expect themselves to behave in an agenty way. My parents are a good example of people who are totally 100% reliable, but don't expect or want to change their attitudes or beliefs much in the next twenty years.
These three categories probably don't include all the subconscious criteria I use, but they're the main ones I can think of.
How does this affect relationships with people?
With people who I model as agents, I'm more likely to invoke phrases like "it was your fault that X happened" or "you said you would do Y, why didn't you?" The degree to which I feel blame or judgement towards people for not doing things they said they would do is almost directly proportional to how much I model them as agents. For people who I consider less agenty, whom I model more as complex systems, I'm more likely to skip the blaming step and jump right to "what are the things that made it hard for you to do Y? Can we fix them?"
On reflection, it seems like the latter is a healthier way to treat myself, and I know this (and consistently fail at doing this). However, I want to be treated like an agent by other people, not a complex system; I want people to give me the benefit of the doubt and assume that I know what I want and am capable of planning to get it. I'm not sure what this means for how I should treat other people.
How does this affect moral value judgements?
For me, not at all. My default, probably hammered in by years of nursing school, is to treat every human as worthy of dignity and respect. (On a gut level, it doesn't include animals, although it probably should. On an intellectual level, I don't think animals should be mistreated, but animal suffering doesn't upset me on the same visceral level that human suffering does. I think that on a gut level, my "circle of empathy" includes human dead bodies more than it includes animals).
One of my friends asked me recently if I got frustrated at work, taking care of people who had "brought their illness on themselves", i.e. by smoking, alcohol, drug use, eating junk food for 50 years, or whatever else people usually put in the category of "lifestyle choices." Honestly, I don't; it's not a distinction my brain makes. Some of my patients will recover, go home, and make heroic efforts to stay healthy; others won't, and will turn up back in the ICU at regular intervals. It doesn't affect how I feel about treating them; it feels meaningful either way. The one time I'm liable to get frustrated is when I have to spend hours of hard work on patients who are severely neurologically damaged and are, in a sense, dead already, or at least not people anymore. I hate this. But my default is still to talk to them, keep them looking tidy and comfortable, et cetera...
In that sense, I don't know if modelling different people differently is, for me, a morally right or wrong thing to do. However, I spoke to someone whose default is not to assign people moral value unless he models them as agents. I can see this being problematic, since it's a high standard.
Conclusion
As usual for when I notice something new about my thinking, I expect to pay a lot of attention to this over the next few weeks, and probably notice some interesting things, and quite possibly change the way I think and behave. I think I've already succeeded in finding the source of some mysterious frustration with my roommate; I want to model her as an agent because of #1–she's my best friend and we've been through a lot together–but in the sense of #3, she's one of the least agenty people I know. So I consistently, predictably get mad at her for things like saying she'll do the dishes and then not doing them, and getting mad doesn't help either of us at all.
I'm curious to hear what other people think of this idea.
Making Rationality General-Interest
Introduction
Less Wrong currently represents a tiny, tiny, tiny segment of the population. In its current form, it might only appeal to a tiny, tiny segment of the population. Basically, the people who have a strong need for cognition, who are INTx on the Myers-Briggs (65% of us as per 2012 survey data), etc.
Raising the sanity waterline seems like a generally good idea. Smart people who believe stupid things, and go on to invest resources in stupid ways because of it, are frustrating. Trying to learn rationality skills in my 20s, when a bunch of thought patterns are already overlearned, is even more frustrating.
I have an intuition that a better future would be one where the concept of rationality (maybe called something different, but the same idea) is normal. Where it's as obvious as the idea that you shouldn't spend more money than you earn, or that you should live a healthy lifestyle, etc. The point isn't that everyone currently lives debt-free, eats decently well and exercises–that isn't the case–but these are normal things to do if you're a minimally proactive person who cares a bit about your future. No one has ever told me that doing taekwondo to stay fit is weird and culty, or that keeping a budget will make me unhappy because I'm overthinking things.
I think the questions of "whether we should try to do this" and "if so, how do we do it in practice?" are both valuable to discuss, and interesting.
Is making rationality general-interest a good goal?
My intuitions are far from 100% reliable. I can think of a few reasons why this might be a bad idea:
1. A little bit of rationality can be damaging; it might push people in the direction of too much contrarianism, or something else I haven't thought of. Since introspection is imperfect, knowing a bit about cognitive biases and the mistakes that other people make might make people actually less likely to change their mind–they see other people making those well-known mistakes, but not themselves. Likewise, rationality taught only as a tool or skill, without any kind of underlying philosophy of why you should want to believe true things, might cause problems of a similar nature to martial art skills taught without the traditional, often non-violent philosophies–it could result in people abusing the skill to win fights/debates, making the larger community worse off overall. (Credit to Yan Zhang for martial arts metaphor).
2. Making the concepts general-interest, or just growing too fast, might involve watering them down or changing them in some way such that the value of the LW microcommunity is lost. This could be worse for the people who currently enjoy LW even if it isn't worse overall. I don't know how easy it would be to avoid, or whether it could be avoided at all.
3. It turns out that rationalists don't actually win, and x-rationality, as Yvain terms it, just isn't that amazing over-and-above already being proactive and doing stuff like keeping a budget. Yeah, you can say stuff like "the definition of rationality is that it helps you win", but if in real life, all the people who deliberately try to increase their rationality end up worse off overall, by their own standards (or even do equally well, but with less time left over for other fun pursuits), than the people who aim for their life goals directly, I want to know that.
4. Making rationality general-interest is a good idea, but not the best thing to be spending time and energy on right now because of Mysterious Reasons X, Y, Z. Maybe I only think it is because of my personal bias towards liking community stuff (and wishing all of my friends were also friends with each other and liked the same activities, which would simplify my social life, but probably shouldn't happen for good reasons).
Obviously, if any of these are the case, I want to know about it. I also want to know about it if there are other reasons, off my radar, why this is a terrible idea.
What has to change for this to happen?
I don't really know, or I would be doing those things already (maybe, akrasia allowing). I have some ideas, though.
1. The jargon thing. I'm currently trying to compile a list of LW/CFAR jargon as a project for CFAR, and there are lots of terms I don't know. There are terms that I've realized in retrospect that I was using incorrectly all along. This presents both a large initial effort for someone interested in learning about rationality via the LW route, and also might contribute to the looking-like-a-cult thing.
2. The gender ratio thing. This has been discussed before, and it's a controversial thing to discuss, and I don't know how much arguing about it in comments will present any solutions. It seems pretty clear that if you want to appeal to the whole population, and a group that represents 50% of the general population only represents 10% of your participants (also as per 2012 survey data, see link above), there's going to be a problem somewhere down the road.
My data point: as a female on LW, I haven't experienced any discrimination, and I'm a bit baffled as to why the gender ratio is so skewed in the first place. Then again, I've already been through the filter of not caring if I'm the only girl at a meetup group. And I do hang out in female-dominated groups (i.e. the entire field of nursing), and fit in okay, but I'm probably not all that good as a typical example to generalize from.
3. LW currently appeals to intelligent people, or at least people who self-identify as intelligent; according to the 2012 survey data, the self-reported IQ median is 138 (on the usual scale with a mean of 100 and a standard deviation of 15, that's about 2.5 standard deviations up, i.e. roughly the top half-percent of the population). This wouldn't be surprising, and isn't a problem until you want to appeal to more than 1% of the population. But intelligence and rationality are, in theory, orthogonal, or at least not the same thing. If I suffered a brain injury that reduced my IQ significantly but didn't otherwise affect my likes and dislikes, I expect I would still be interested in improving my rationality and think it was important, perhaps even more so, but I also think I would find it frustrating. And I might feel horribly out of place.
4. Rationality in general has a bad rap; specifically, the Spock thing. And this isn't just affecting whether or not people think Less Wrong the site is weird; it's affecting whether they want to think about their own decision-making.
This is only what I can think of in 5 minutes...
What's already happening?
Meetup groups are happening. CFAR is happening. And there are groups out there practicing skills similar or related to rationality, whether or not they call it the same thing.
Conclusion
Rationality, Less Wrong and CFAR have, gradually over the last 2-3 years, become a big part of my life. It's been fun, and I think it's made me stronger, and I would prefer a world where as many other people as possible have that. I'd like to know if people think that's a) a good idea, b) feasible, and c) how to do it practically.
How I Became More Ambitious
Follow-up to How I Ended Up Non-Ambitious
Living with yourself is a bit like having a preteen and watching them get taller; the changes happen so slowly that it's almost impossible to notice them, until you stumble across an old point of comparison and it becomes blindingly obvious. I hit that point a few days ago, while planning what I might want to talk about during an OkCupid date. My brain produced the following thought: "well, if this topic comes up, it might sound like I'm trying to take over the world, and that's intimidating– Wait. What?"
I'm not trying to take over the world. It sounds like a lot of work, and not my comparative advantage. If it seemed necessary, I would point out the problems that needed solving and delegate them to CFAR alumni with more domain-specific expertise than me.
However, I went back and reread the post linked at the beginning, and I no longer feel much kinship with that person. This is a change that happened maybe 25-50% deliberately, and the rest by drift, but I still changed my mind, so I will try to detail the particular changes, and what I think led to them. Introspection is unreliable, so I'll probably be at least 50% wrong, but what can you do?
1. Idealism versus practicality
I would still call myself practical, but I no longer think that this comes at the expense of idealism. Idealism is absolutely essential, if you want to have a world that changes because someone wanted it to, as opposed to just by drift. Lately in the rationalist/CFAR/LW community, there's been a lot of emphasis on agency and agentiness, which basically mean the ability to change the world and/or yourself deliberately, on purpose, through planned actions. This is hard. The first step is idealism: being able to imagine a state of affairs that is different and better. Then comes practicality, the part where you sit down and work hard and actually get something done.
It's still true that idealism without practicality doesn't get much done, and practicality without idealism can get a lot done, but it matters what problems you're working on, too. Are you being strategic? Are you even thinking, at all, about whether your actions are helping to accomplish your goals? One of the big things I've learned, a year and a half and two CFAR workshops later, is how automatic and easy this lack of strategy really is.
I had a limited sort of idealism in high school; I wanted to do work that was important and relevant, but I was lazy about it. I wanted someone to tell me what was important to be doing right now. Nursing seemed like an awesome solution. It still seems like a solution, but recently I've admitted to myself, with a painful twinge, that it might not be the best way for me, personally, to help the greatest number of people using my current and potential skill set. It's worth spending a few minutes or hours looking for interesting and important problems to work on.
I don't think I had the mental vocabulary to think that thought a year and a half ago. Some of the change comes from having dated an economics student. Come to think of it, I expect some of his general ambition rubbed off on me, too. The rest of the change comes from hanging out with the effective altruism and similar communities.
I'm still practical. I exercise, eat well, go to bed on time, work lots of hours, spend my money wisely, and maintain my social circle mostly on autopilot; it requires effort but not deliberate effort. I'm lucky to have this skill. But I no longer think it's a virtue over and above idealism. Practical idealists make the biggest difference, and they're pretty cool to hang out with. I want to be one when I grow up.
2. Fear of failure
Don't get me wrong. If there's one deep, gripping, soul-crushing terror in my life, one thing that gives me literal nightmares, it's failure. Making mistakes. Not being good enough. Et cetera.
In the past few years, the main change has been admitting to myself that this terror doesn't make a lot of sense. First of all, it's completely miscalibrated. As Eliezer pointed out during a conversation on this, I don't fail at things very often. Far from being a success, this is likely a sign that the things I'm trying aren't nearly challenging enough.
My threshold for what constitutes failure is also fairly low. I made a couple of embarrassing mistakes during my spring clinical. Some part of my brain is convinced that this equals permanent failure; I wasn't perfect during the placement, and I can't go back and change the past, thus I have failed. Forever.
I passed the clinical, wrote the provincial exam (results aren't in but I'm >99% confident I passed) (EDIT: Passed! YEAAHHH!!!), and I'm currently working in the intensive care unit, which has been my dream since I was about fifteen. The part of my brain that keeps telling me I failed permanently obviously isn't saying anything useful.
I think 'embarrassing' is a keyword here. The first thing I thought, on the several occasions that I made mistakes, was "oh my god did I just kill someone... Phew, no, no harm done." The second thought was "oh my god, my preceptor will think I'm stupid forever and she'll never respect me and no one wants me around, I'm not good enough..." This line of thought never goes anywhere good. It says something about me, though, that "I'm not good enough" is very directly connected to people wanting me around, to belonging somewhere. For several personality-formative years of my life, people didn't want me around. Probably for good reason; my ten-year-old self was prickly and socially inept and miserable. I think a lot of my determination not to seek status comes from the "uncool kids trying to be cool are pathetic" meme that was so rampant when I was in sixth grade.
Oh, and then there's the traumatic swim team experience. Somewhere, in a part of my brain where I don't go very often nowadays, there's a bottomless whirlpool of powerless rage and despair around the phrase "no matter how hard I try, I'll never be good enough." So when I make an embarrassing mistake, my ten-year-old self is screaming at me "no wonder everyone hates you!" and my fourteen-year-old self is sadly muttering that "you know, maybe you just don't have enough natural talent," and none of it is at all useful.
The thing about those phrases is that they refer to complex and value-laden concepts, in a way that makes them seem like innate attributes, à la Fundamental Attribution Error. "Not good enough" isn't a yes-or-no attribute of a person; it's a magical category that only sounds simple because it's a three-word phrase. I've gotten somewhat better at propagating this to my emotional self. Slightly. It's a work in progress.
During a conversation about this with Anna Salamon, she noted that she likes to approach her own emotions and ask them what they want. It sounds weird, but it's helpful. "Dear crushing sense of despair and unworthiness, what do you want? ...Oh, you're worried that you're going to end up an outcast from your tribe and starve to death in the wilderness because you accidentally gave an extra dose of digoxin? You want to signal remorse and regret and make sure everyone knows you're taking your failure seriously so that maybe they'll forgive you? Thank you for trying to protect me. But really, you don't need to worry about the starving-outcast thing. No one was harmed and no one is mad at you personally. Your friends and family couldn't care less. This mistake is data, but it's just as much data about the environment as it is about your attributes. These hand-copied medication records are the perfect medium for human error. Instead of signalling remorse, let's put some mental energy into getting rid of the environmental conditions that led to this mistake."
Rejection therapy and having a general CoZE [Comfort Zone Expansion] mindset helped remove some of the sting of "but I'll look stupid if I try something too hard and fail at it!" I still worry about the pain of future embarrassment, but I'm more likely to point out to myself that it's not a valid objection and I should do X anyway. Making "I want to become stronger" an explicit motto is new to the last year and a half, too, and helps by giving me ammunition for why potential embarrassment isn't a reason not to do something.
In conclusion: failure still sucks. I'm a perfectionist. But I failed in a lot of small ways during my spring clinical, and passed/got a job anyway, which seems to have helped me propagate to my emotional self that it's okay to try hard things, where I'm almost certain to make mistakes, because mistakes don't equal instant damnation and hatred from all of my friends.
3. The morality of ambition
While I was in San Francisco a month ago, volunteering at the CFAR workshop and generally spending my time surrounded by smart, passionate, and ambitious people (thus convincing my emotional system that this is normal and okay), I had a conversation with Eliezer. He asked me to list ten areas in which I was above average.
This was a lot more painful than it had any reason to be. After bouncing off various poorly-formed objections in my mind, I said to myself "you know, having trouble admitting what you're good at doesn't make you virtuous." This was painful; losing a source of feeling-virtuous always is. But it was helpful. Yeah, talking all the time about how awesome you are at X, Y, Z makes you a bit of a bore. People might even avoid you (oh! the horror!). However, this doesn't mean that blocking even the thought of being above average makes you a good person. In fact, it's counterproductive. How are you supposed to know what problems you're capable of solving in the world if you can't be honest with yourself about your capabilities?
This conversation helped. (Even if some of the effect was "high status person says X -> I believe X," who cares? I endorsed myself changing my mind about this a year and a half ago. It's about time.)
HPMOR helped, too; specifically, the idea that there are four houses which have different positive qualities. Slytherins are demonized in canon, but in HPMOR their skills are recognized as essential. I can easily recognize the Ravenclaw and Hufflepuff and even the Gryffindor in myself, but not much of Slytherin. Having a word for the ambition-cunning-strategic concept cluster is helpful. I can ask myself "now what would a Slytherin do with this information?" I can think thoughts that feel very un-virtuous. "I'm young and prettier than average. What's a Slytherin way to use this... Oh, I suppose I can leverage it to get high-status men to pay attention to me long enough for me to explain the merits of an idea I have." This thought feels yuck, but the universe doesn't explode.
Probably the biggest factor was going to the CFAR workshops in the first place. Not from any of the curriculum, particularly, although the mindset of goal factoring helped me to realize that the mental action of "feeling unvirtuous for thinking in ambitious or calculating ways" wasn't accomplishing anything I wanted. Mostly the change came from social normalization, from hanging out with people who talked openly about their strengths and weaknesses, and no one got shunned.
[Silly plan for taking over the world: Arrange to meet high-status people and offer to give their children swimming lessons. Gain their trust. Proceed from there.]
4. Laziness
Nope. Still lazy. If anything, akrasia and procrastination are more of a problem now that I'm trying to do harder things more deliberately.
I've been keeping written goals for about a year now. This means I actually notice when I don't accomplish them.
I use Remember the Milk as a GTD system, and some other productivity/organization software (RescueTime, Mint.com, etc). I finally switched to Gmail, where I can use Boomerang and other useful tools. My current openness to trying new organization methods is high.
My general interest in trying things is higher, mainly because I have lots of community-endorsed-warm-fuzzies positive affect around that phrase. I want to be someone who's open to new experiences; I've had enough new experiences to realize how exhilarating they can be.
Conclusion
I now have a wider range of potentially high-value personal projects ongoing. I now have an explicit goal of being well-known for non-fiction writing, probably in a blog form, in the next five years. (Do I have enough interesting things to say to make this a reality? We'll see. Is this goal vague? Yes. Working on it. I used to reject goals if they weren't utterly concrete, but even vague goals are something to build on).
I'm more explicit with myself about what I want from CFAR curriculum skills. (The general problem of critical thinking in nursing? Solvable! Why not?)
I think I've finally admitted to myself that "well, I'll just live in a cozy little house near my parents and work in the ICU and raise kids for the next forty years" might not be particularly virtuous or fun. There are things I would prefer to be different in the world, even if I can only completely specify a few of them. There are exciting scary opportunities happening all the time. I'm lucky enough to belong to a community of people that can help me find them.
I don't have plans for much beyond the next year. But here's to the next decade being interesting!
The Centre for Applied Rationality: a year later from a (somewhat) outside perspective
I recently had the privilege of being a CFAR alumna volunteering at a later workshop, which is a fascinating thing to do; it put me in a position both to evaluate how much of a difference the first workshop actually made in my life, and to see how the workshops themselves have evolved.
Exactly a year ago, I attended one of the first workshops, back when they were still inexplicably called “minicamps”. I wasn't sure what to expect, and I especially wasn't sure why I had been accepted. But I bravely bullied the nursing faculty staff until they reluctantly let me switch a day of clinical around, and later stumbled off my plane into the San Francisco airport in a haze of exhaustion. The workshop spat me out three days later, twice as exhausted, with teetering piles of ideas and very little time or energy to apply them. I left with a list of annual goals, which I had never bothered to have before, and a feeling that more was possible–this included the feeling that more would have been possible if the workshop had been longer and less chaotic, if I had slept more the week before, if I hadn't had to rush out on Sunday evening to catch a plane and miss the social.
Like I frequently do on Less Wrong the website, I left the minicamp feeling a bit like an outsider, but also a bit like I had come home. As well as my written goals, I made an unwritten pre-commitment to come back to San Francisco later, for longer, and see whether I could make the "more is possible" in my head more specific. Of my thirteen written goals on my list, I fully accomplished only four and partially accomplished five, but I did make it back to San Francisco, at the opportunity cost of four weeks of sacrificed hospital shifts.
A week or so into my stay, while I shifted around between different rationalist shared houses and attempted to max out interesting-conversations-per-day, I found out that CFAR was holding another May workshop. I offered to volunteer, proved my sincerity by spending 6 hours printing and sticking nametags, and lived on site for another 4-day weekend of delightful information overload and limited sleep.
Before the May 2012 workshop, I had a low prior that any four-day workshop could be life-changing in a major way. A four-year nursing degree, okay–I've successfully retrained my social skills and my ability to react under pressure by putting myself in particular situations over and over and over and over again. Four days? Nah. Brains don't work that way.
In my experience, it's exceedingly hard for the human brain to do anything deliberately. In Kahneman-speak, habits are System 1, effortless and automatic. Doing things on purpose involves System 2, effortful and a bit aversive. I could have had a much better experience in my final intensive care clinical if I'd thought to open up my workshop notes and tried to address the causes of aversions, or use offline time to train habits, or, y'know, do anything on purpose instead of floundering around trying things at random until they worked.
(Then again, I didn't apply concepts like System 1 and System 2 to myself a year ago. I read 'Thinking Fast and Slow' by Kahneman and 'Rationality and the Reflective Mind' by Stanovich as part of my minicamp goal 'read 12 hard nonfiction books this year', most of which came from the CFAR recommended reading list. If my preceptor had had any idea what I was saying when I explained to her that she was running particular nursing skills on System 1, because they were ingrained on the level of habit, and that I was running the same tasks on System 2 in working memory, because they were new and confusing to me, and that this was why I appeared to have poor time management, because System 2 takes forever to do anything, this terminology might have helped. Oh, for the world where everyone knows all the jargon!)
...And here I am, setting aside a month of my life to think only about rationality. I can't imagine that my counterfactual self-who-didn't-attend-in-May-2012 would be here. I can't imagine that being here now will have zero effect on what I'm doing in a year, or ten years. Bingo. I did one thing deliberately!
So what was the May 2013 workshop actually like?
The curriculum has shifted around a lot in the past year, and I think with 95% probability that it's now more concretely useful. (Speaking of probabilities, the prediction markets during the workshop seemed to flow better and be more fun and interesting this time, although this may just show that I was more averse to games in general and betting in particular a year ago. In that case, yay for partly-cured aversions!)
The classes are grouped in an order that allows them to build on each other usefully, and they've been honed by practice into forms that successfully teach skills, instead of just putting words in the air and on flipcharts. For example, having a personal productivity system like GTD came across as a culturally prestigious thing at the last workshop, but there wasn't a lot of useful curriculum on it. Of course, I left on this trip wanting to spend my offline month creating a GTD system better than paper to-do lists taped to walls, so I have both motivation and a low threshold for improvement.
There are also some completely new classes, including "Againstness training" by Valentine, which seems to relate to some of the 'reacting under pressure' stuff in interesting ways, and gave me vocabulary and techniques for something I've been doing inefficiently by trial and error for a good part of my life.
In general, there are more classes about emotions, both how to deal with them when they're in the way and how to use them when they're the best tool available. Given that none of us are Spock, I think this is useful.
Rejection therapy has morphed into a less terrifying and more helpful form with the awesome name of CoZE (Comfort Zone Expansion). I didn't personally find the original rejection therapy all that awful, but some people did, and that problem is largely solved.
The workshops are vastly more orderly and organized. (I like to think I contributed to this slightly with my volunteer skills of keeping the fridge stocked with water bottles and calling restaurants to confirm orders and make sure food arrived on time.) Classes began and ended on time. The venue stayed tidy. The food was excellent. It was easier to get enough sleep. Etc. The May 2012 venue had a pool, and this one didn't, which made exercise harder for addicts like me. CFAR staff are talking about solving this.
The workshops still aren't an easy environment for introverts. The negative parts of my experience in May 2012 were mostly because of this. It was easier this time, because as a volunteer I could skip classes if I started to feel socially overloaded, but periods of quiet alone time had to be effortfully carved out of the day, and at an opportunity cost of missing interesting conversations. I'm not sure this problem is solvable without either making the workshops longer, in order to space the material out (and thus less accessible for people with jobs), or cutting out curriculum. Either would impose a cost on the extroverts who don't want an hour at lunch to meditate or go running alone or read a sci-fi book, etc.
In general, I found the May 2012 workshop too short and intense–we had material thrown at us at a rate far exceeding the usual human idea-digestion rate. Keeping in touch via Skype chats with other participants helped. CFAR now does official followups with participants for six weeks following the workshop.
Meeting the other participants was, as usual, the best part of the weekend. The group was quite diverse, although I was still the only health care professional there. (Whyyy???? The health care system needs more rationality so badly!) The conversations were engaging. Many of the participants seem eager to stay in touch. The May 2012 workshop has a total of six people still on the Skype chats list, which is a 75% attrition rate. CFAR is now working on strategies to help people who want to stay in touch do it successfully.
Conclusions?
I thought the May 2012 workshop was awesome. I thought the May 2013 workshop was about an order of magnitude more awesome. I would say that now is a great time to attend a CFAR workshop...except that the organization is financially stable and likely to still be around in a year and producing even better workshops. So I'm not sure. Then again, rationality skills have compound interest–the value of learning some new skills now, even if they amount more to vocab words and mental labels than superpowers, compounds over the year that you spend seeing all the books you read and all the opportunities you have in that framework. I'm glad I went a year ago instead of this May. I'm even more glad I had the opportunity to see the new classes and meet the new participants a year later.
Learning critical thinking: a personal example
Related to: Is Rationality Teachable?
“Critical care nursing isn’t about having critically ill patients,” my preceptor likes to say, “it’s about critical thinking.”
I doubt she's talking about the same kind of critical thinking that philosophers are, and I find that definition abstract anyway. There’s been a lot of talk about critical thinking during our four years of nursing school, but our profs seem to have a hard time defining it. So I’ll go with a definition from Google.
Critical thinking can be seen as having two components: 1) a set of information and belief generating and processing skills, and 2) the habit, based on intellectual commitment, of using those skills to guide behaviour. It is thus to be contrasted with: 1) the mere acquisition and retention of information alone, because it involves a particular way in which information is sought and treated; 2) the mere possession of a set of skills, because it involves the continual use of them; and 3) the mere use of those skills ("as an exercise") without acceptance of their results. [1]
That’s basically rationality–epistemic, i.e. generating true beliefs, and instrumental, i.e. knowing how to use them to achieve what you want. Maybe part of me expected, implicitly, to have an easier time learning this skill because of my Less Wrong knowledge. And maybe I am more consciously aware of my mistakes, and the cognitive factors that caused them, than most of my classmates. When it’s forty-five minutes past the end of my shift and I’m still charting, I’m also calling myself out on succumbing to the planning fallacy. I once went through the first half hour of a shift during my pediatrics rotation thinking that one of my patients had cerebral palsy, when he actually had cystic fibrosis–all because I misread my prof’s handwriting as ‘CP’ when she’d written ‘CF’. I was totally confused by all the enzyme supplements on his list of meds, but it still took me a while to figure it out–a combination of priming and confirmation bias, taken to the next level.
But, overall, even if I know what I'm doing wrong, it hasn’t been easier to do things right. I have a hard time with the hospital environment, possibly because I’m the kind of person who ended up reading and posting on Less Wrong. My cognitive style leans towards Type 2 reasoning, in Keith Stanovich’s taxonomy–thorough, but slow. I like to understand things, on a deep level. I like knowing why I’m doing something, and I don’t trust my intuitions, the fast-and-dirty product of Type 1 reasoning. But Type 2 reasoning requires a lot of working memory, and humans aren’t known for that, which is the source of most of my frustration and nearly all of my errors–when working memory overload forces me to be a cognitive miser.
Still, for all the frustration, I’m pretty sure I’ve ended up in the perfect environment to learn this skill called ‘critical thinking.’ I’m way out of my depth–which I expected. No fourth year student is ready to work independently in a trauma ICU, but I decided to finish my schooling here in the name of tsuyoku naritai, and for all the days when I’ve gone home crying, it’s still worth it. I’m learning.
The skills
1. A set of information and belief generating and processing skills.
Medicine, and nursing, are a bit like physics, in that you need to generate true beliefs about systems that exist outside of you, and predict how they're going to behave. This involves knowing a lot of abstract theory, which I'm good at, and a lot of heuristics and pattern-matching for applying the right bits of theory to particular patients, which I'm less good at. That's partly an experience thing; my brain needs patterns to match to. But in general, I have decent mental models of my patients. I'm curious and I like to understand things. If I don't know which part of the theory applies, I ask.
2. The habit, based on intellectual commitment, of using those skills to guide behaviour.
So you’ve got your mental model of your patient, your best understanding of what’s actually going on, on a physiological and biochemical level, down under the skin where you can’t see it. You know what “normal” is for a variety of measures: vital signs, lung sounds, lab values, etc. Given that your patient is in the ICU, you know something’s abnormal, or they wouldn’t be there. Their diagnosis tells you what to expect, and you look at the results of your assessments and ask a couple of questions. One: is this what I expect, for this patient? Two: what do I need to do about it?
I’m not going to be surprised if a post-op patient has low hemoglobin. It’s information of a kind–it tells the doctor whether or not the patient needs a transfusion, and how many units–but it’s not really new information, and a moderately abnormal value wouldn’t worry me or anyone else. If their hemoglobin keeps dropping, okay: they’re actively bleeding somewhere, which is irritating, and possibly dangerous, and needs dealing with, but it’s not surprising.
But if a patient here for an abdominal surgery suddenly has decreased level of consciousness and their pupils aren’t reacting normally to light, I’m worried. There’s nothing in my mental model that says I should expect it. I notice I’m confused, and that confusion guides my behaviour; I call the doctor right away, because we need more information to update our collective mental model, information you can’t get just from observation, like a CT scan of the head. (Even this is optimistic–plenty of patients are admitted to the ICU because we have no idea what’s wrong with them, and are hoping to keep them alive long enough to find out.)
The basics of ICU nursing come down to treating numbers. Heart rate, blood pressure, oxygen saturation, urine output, etc.: know the acceptable range, notice if the numbers change, and use Treatment X to get them back where they’re supposed to be. Which doesn’t sound that hard. But implicit in ‘notice if they change’ is ‘figure out why they changed’, because that affects how you treat them, and implicit in that is a lot of background knowledge, which has to be put in context.
I’m, honestly, fairly terrible at this. It’s a compartmentalization thing. I don’t like using my knowledge as input arguments to generate new conclusions and then relying on those conclusions to treat human beings. It feels like guessing. Even though, back in high school, I never really needed to study for physics tests–if I understood what we’d learned, I could re-derive forgotten details from first principles. But hospital patients ended up in a non-overlapping magisterium in my head. In order for me to trust my knowledge, it has to have come directly from the lips of a teacher or experienced nurse.
My preceptor hates this. “She needs to continue to work on her critical thinking when it comes to caring for critically ill patients,” she wrote on my evaluation. “She knows the theory, and is now working to apply it to ICU nursing.” Shorthand for: she knows the theory, but getting her to apply it to ICU nursing is like pulling teeth. A number of our conversations have gone like this:
Me: “Our patient’s blood pressure dropped a bit.”
Her: “Yeah, it did. What do you want to do about it?”
Me: “I, uh, I don’t know... Should I increase the vasopressors?”
Her: “I don’t know, should you?”
Me: “Uh, maybe I should increase the phenylephrine to 40 mcg/min and see what happens. How long should I wait to see?”
Her: “You tell me.”
Me: “Well, let’s say it’ll take a few minutes for what’s in the tubing now to get pushed through, and it should take effect pretty quickly because it’s IV, like a minute... So if his blood pressure’s not up enough in five minutes, I’ll increase the phenyl to 60. Does that sound okay?”
Her: “It’s your decision to make."
Needless to say, I find this teaching method extremely stressful and scary, and I’m learning about ten times more than I would if she answered the questions I asked. Because “the mere acquisition and retention of information alone” isn’t my problem. I have a brain like an encyclopaedia. My problem, in the critical care nursing context, is the “particular way in which information is sought and treated.” I need to know the right time to notice something is wrong, the right place to look in my encyclopaedia, and the right way to take the information I just looked up and figure out what to do with it.
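If I caricature that phenylephrine conversation as code–purely to show the shape of the reasoning, with the dose steps and five-minute wait taken from the dialogue above and the target blood pressure invented for illustration, emphatically not a clinical protocol–it looks something like this:

```python
import time

# A toy sketch of the titrate-and-reassess loop from the conversation above.
# The dose steps and the five-minute wait come from the dialogue; the target
# pressure is invented for illustration. This is NOT a clinical protocol.

TARGET_SYSTOLIC = 100      # hypothetical acceptable systolic BP, mmHg
DOSE_STEPS = [40, 60]      # phenylephrine rates from the dialogue, mcg/min
REASSESS_SECONDS = 5 * 60  # IV onset is fast, but the tubing has to push through

def titrate(read_bp, set_rate):
    """Step the vasopressor up, wait, reassess; escalate if no response."""
    for rate in DOSE_STEPS:
        set_rate(rate)
        time.sleep(REASSESS_SECONDS)   # wait out the dead space in the tubing
        if read_bp() >= TARGET_SYSTOLIC:
            return rate                # pressure recovered; hold this rate
    raise RuntimeError("no response to titration; call the doctor")
```

The code itself is trivial; the point is that every "what do you want to do about it?" hides a target value, a check-back time, and an escalation path, and my preceptor is making me state all three out loud.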
The mistakes
Some of my errors, unsurprisingly, boil down to a failure to override inappropriate Type 1 responses with Type 2 responses–in other words, not thinking about what I’m doing. But most of them are more of a mindware gap–I don’t yet have the “domain-specific knowledge sets” that the nurses around me have. Not just theory knowledge; I do have most of that; but the procedural habits of how to stay organized and prioritize and dump the contents of my working memory onto paper in a way that I can read them back later. Usually, when I make a mistake, I knew better, but the part of my brain that knew better was doing something else at the time, that small note of confusion getting lost in the general chaos.
Pretty much all nurses keep a “feuille de route”–I have yet to find a satisfactory English word for this, but it’s a personal sheet of paper, not legal charting, usually kept in a pocket and used as an extended working memory. In med/surg, when I had four patients, I made a chart with four columns–name and personal information, medications, treatments/general plan for the day, and medical history–and as many rows as I had patients. If something was important, I circled it in red ink. This system doesn’t work in the ICU, so my current feuille de route has several aspects. I fold a piece of blank paper into four, and take notes from the previous shift report on one quarter of one side, or two quarters if it’s a long report. Across from that, I draw a vertical column of times, from 8:00 am to 6:00 pm (or 8:00 pm to 6:00 am); 7:00 pm and 7:00 am are shift change, so nothing else really gets done during those hours. I use this to scribble down what I need to get done during my twelve hours, and approximately when I want to do it, and I prioritize each item from 1 (most important) to 5 (least). Once something’s done, I cross it off–then I can forget about it. On the other side of the paper, I make a cheat sheet for giving report to the next nurse, or presenting my patient to the doctors at rounds.
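For the programmers: here is the same sheet as a toy data structure–a hypothetical sketch to make the structure explicit, not software I actually use:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    due: str       # rough target time, e.g. "10:00"
    what: str      # the thing to get done
    priority: int  # 1 (most important) to 5 (least)
    done: bool = False

@dataclass
class FeuilleDeRoute:
    report_notes: list = field(default_factory=list)   # quarter(s) for shift-report notes
    schedule: list = field(default_factory=list)       # the timed column of Tasks
    handoff_notes: list = field(default_factory=list)  # cheat sheet for giving report

    def next_task(self):
        """What crossing things off does implicitly: surface the most urgent open item."""
        open_tasks = [t for t in self.schedule if not t.done]
        return min(open_tasks, key=lambda t: (t.priority, t.due), default=None)
```

Crossing an item off is just `task.done = True`; after that it stops competing for attention, which is the whole point.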
The paper version might be low-tech and simple, but it takes a huge load off my working memory, and reduces my most frequent error, which is to get so overwhelmed and frazzled that my brain goes on strike–in other words, the failure to override Type 1 responses due to the lack of cognitive capacity to run a Type 2 process. It’s drastically cut down on the frequency of this mental conversation:
Me: “I turned off the sedation, and my patient isn’t waking up as fast as I expected. I notice I’m confused–”
My brain: “You’re always confused! Everything around here is intensely confusing! How am I supposed to use that as information?”
Odd as it might sound, I often don’t notice when my brain starts edging towards a meltdown. The feeling itself is quite recognizable, but the circumstances that lead to it, i.e. overloaded working memory, mean that I’m not usually paying attention to my own feelings.
“You need to stop and take a breath,” my preceptor says about fifty times a day. Easier said than done–but it’s more efficient, overall, to have a tiny part of my mind permanently on standby, keeping an eye on my emotions, noticing when the gears start to overheat. Then stop, take a breath, and let go of everything except the task at hand, trusting myself to have created enough cues in my environment to retrieve the other tasks, once I’m done. Humans don’t multitask well. Doing one thing while trying to remember a list of five others is intense multitasking, and it’s no wonder it’s exhausting.
The implications
“You can’t teach critical thinking,” my preceptor says, but I’m pretty sure that’s exactly what she’s doing right now. A great deal of what I already know is domain-specific to nursing, but most of what I’m learning right now is generally applicable. I’m learning the procedural skills to work through difficult problems, under what Keith Stanovich would call average rather than optimal conditions. Sitting in my own little bubble in front of a multiple choice exam–that’s optimal conditions. Trying to figure out if I should be surprised or worried about my patient’s increased heart rate, while simultaneously deciding whether or not I can ignore the ventilator alarm and whether I can finish giving my twelve o’clock antibiotic before I need to do twelve o’clock vitals–that’s not just average conditions, it’s under-duress conditions.
I’m hoping that after a few more weeks, or maybe a few more years, I’ll be able to perform comfortably in this intensely terrifying environment. And I’m hoping that some of the skills I learn will be general-purpose, for me at least. It’d be nice if they were teachable to others, too, but I think my preceptor might be right about one thing–you can’t teach this kind of critical thinking in the classroom. It's about moulding my brain into the right shape, and everyone's brain starts out in a different shape, so the mould has to be personalized.
But the habits are general ones. Notice when you're faced with a difficult problem, or making an important decision. Notice that you're doing this while distracted. Stop and take a breath. Get out a piece of paper. Figure out how the problem is formatted in your mind, and format it that way on the paper. (This is probably the hardest part.) Dump your working memory and give yourself space to think. Prioritize from 1 to n. Keep an eye on the evolving situation, sure, but find that moment of concentration in the midst of chaos, and solve the problem.
Of course, it's far from guaranteed that this will work. I'm making an empirical prediction: that the skills I'm currently learning will be transferable to non-nursing areas, and that they'll make a difference in my life outside of work. I'll be on the lookout for examples, either of success or failure.
References
[1] Scriven, Michael & Paul, Richard (2011). Defining critical thinking. The Critical Thinking Community. http://www.criticalthinking.org/pages/defining-critical-thinking/410
Study on depression
I am currently running a study on depression, in collaboration with Shannon Friedman (http://lesswrong.com/user/ShannonFriedman/overview/). If you are interested in participating, the study involves filling out a survey and will take a few minutes of your time (half an hour would be very generous), most likely once a week for four weeks. Send me an email at mdixo100@uottawa.ca, and I can give you more details.
Thank you!
Playing the student: attitudes to learning as social roles
This is a post about something I noticed myself doing this year, although I expect I’ve been doing it all along. It’s unlikely to be something that everyone does, so don’t be surprised if you don’t find this applies to you. It's also an exercise in introspection, i.e. likely to be inaccurate.
Intro
If I add up all the years that I’ve been in school, it amounts to about 75% of my life so far–and at any one time, school has probably been the single activity that I spend the most hours on. I would still guess that 50% or less of my general academic knowledge was actually acquired in a school setting, but school has tests, and grades at the end of the year, and so has provided most of the positive/negative reinforcement related to learning. The ‘attitudes to learning’ that I’m talking about apply in a school setting, not when I’m learning stuff for fun.
Role #1: Overachiever
Up until seventh grade, I didn’t really socialize at school–but once I started talking to people, it felt like I needed a persona, so that I could just act ‘in character’ instead of having to think of things to say from scratch. Being a stereotypical overachiever provided me with easy material for small talk–I could talk about schoolwork to other people who were also overachievers.
Years later, after acquiring actual social skills in the less stereotyped environments of part-time work and university, I play the overachiever more as a way of reducing my anxiety in class. (School was easy for me up until my second year of nursing school, when we started having to do scary things like clinical placements and practical exams, instead of nice safe things like written exams.) If I can talk myself into always being curious and finding everything exciting and interesting and ‘cool, I want to do that!!!’, I can’t find everything scary–or, at the very least, to other people it looks like I’m not scared.
Role #2: Too Cool for School
This isn’t one I’ve played too much, aside from my tendency to put studying for exams as maybe my fourth priority–after work, exercise, and sleep–and still having an A average. (I will still skip class to work a shift at the ER any day, but that doesn’t count–working there is almost more educational than class, in my mind.) As one of my LW Ottawa friends pointed out, there’s a sort of counter-signalling involved in being a ‘lazy’ student–if you can still pull off good grades without doing any work, you must be smart, so people notice this and respect it.
My brother is the prime example of this. He spent grades 9 through 11 alternately sleeping and playing on his iPhone in class, and maintained an average well over 80%. In grade 12 he started paying attention in class and occasionally doing homework, and graduated with, I believe, an average over 95%. He had a reputation throughout the whole school–as someone who was very smart, but also cool.
Role #3: Just Don’t Fail Me!
Weirdly enough, it wasn’t at school that I originally learned this role. As a teenager, I did competitive swimming. The combination of not having outstanding talent for athletics, plus the anxiety that came from my own performance depending on how fast the other swimmers were, made this about 100 times more terrifying than school. At some point I developed a weird sort of underconfidence, the opposite of using ‘Overachiever’ to deal with anxiety. My mind has now created, and made automatic, the following subroutine: “when an adult takes you aside to talk to you about anything related to ‘living up to your potential’, start crying.” I’m not sure what the original logic behind this was: get the adult to stop and pay attention to me? Get them to take me more seriously? Get them to take me less seriously? Or just that I couldn’t stomach being ordinarily below average at something–I had to be in some way differently below average. Who knows if there was much logic behind it at all?
Having this learned role comes back to bite me now, sometimes–the subroutine gets triggered in any situation that feels too much like my swim coach’s one-on-one pre-competition pep talks. Taekwondo triggers it once in a while. Weirdly enough, being evaluated in clinicals triggers it too–this didn’t originally make much sense, since it’s not competitive in the sense of ‘she wins, I lose.’ I think the associative chain there is through lifeguarding courses–the hands-on evaluation aspect used to be fairly terrifying for my younger self, and my monkey brain puts clinicals and lab evaluations into that category, as opposed to the nice safe category of written exams, where I can safely be Too Cool for School and still get good grades.
The inconvenience of thinking about school this way really jumped out at me this fall. I started my semester of clinicals with a prof who was a) spectacularly non-intimidating compared to some others I’ve had, and b) who liked me from the very start, basically because I raised my hand a lot and answered questions intelligently during our more classroom-y initial orientation. I was all set up for a semester of playing ‘Overachiever’, until, quite near the beginning of the semester, I was suddenly expected to do something that I found scary, and I was tired and scared of looking confident but being wrong, and I fell back on ‘Just Don’t Fail Me!’ My prof was, understandably, shocked and confused as to why I was suddenly reacting to her as ‘the scary adult who has the power to pass or fail me and will definitely fail me unless I’m absolutely perfect, so I had better grovel.’ I think she actually felt guilty about whatever she had done to intimidate me–which was nothing.
Since then I’ve been doing fine, progressing at the same rate as all the other students (maybe it says something about me that this isn’t very satisfying, and even kind of feels like failure in itself...I would like to be progressing faster). That is, until I’m alone with my prof and she tries to give me a pep talk about how I’m obviously very smart and doing fine, so I just need to improve my confidence. Then I start crying. At this point, I’m pretty sure she thinks I should be on anti-depressants–which is problematic in itself, but could be more problematic if she was the kind of prof who might fail me in my clinical for a lack of confidence. There’s no objective reason why I can’t hop back into Overachiever mode, since I managed both my clinicals last spring entirely in that mode. But part of my brain protests: ‘she’s seen you being insecure! She wouldn’t believe you as an overachiever, it would be too out of character!’ It starts to make sense once I stop seeing this behaviour as 'my learning style' and recognize it as a social role that I, at some point, probably subconsciously, decided I ought to play.
Conclusion
The main problem seems to be that my original mental models for social interaction–with adults, mostly–are overly simplistic and don’t cut reality at the joints. That’s not a huge problem in itself–I have better models now, and most people I meet say I have good communication skills, although I sometimes still come across as ‘odd’. The problem is that every once in a while, a situation happens, pattern recognition jumps into play, and whoa, I’m playing ‘Just Don’t Fail Me’. (It’s happened with the other two roles too, but they’re less problematic.) Then I can’t get out of that role easily, because my social monkey brain is telling me it would be out of character and the other person would think it was weird. This is despite the fact that I no longer consciously care if I come across as weird, as long as people think I’m competent and trustworthy and nice, etc.
Just noticing this has helped a little–I catch my monkey brain and remind it ‘hey, this situation looks similar to Situation X that you created a stereotyped response for, but it’s not Situation X, so how about we just behave like a human being as usual’. Reminding myself that the world doesn’t break down into ‘adults’ and ‘children’–or, if it did once, I’m now on the other side of the divide–also helps. Failing that, I can consciously try to make sure I get into the 'right’ role–Overachiever or Too Cool For School, depending on the situation–and make that my default.
Has anyone else noticed themselves doing something similar? I’m wondering if there are other roles that I play, maybe more subtly, at work or with friends.
School essay: outsourcing some brain work
I'm currently writing an essay for one of my classes, 'Theoretical Foundations of Nursing.' It's about the most 'gong-si' class I've ever taken. (That's a Chinese term for 'shit talking,' which is my boyfriend's favourite label for any field that gets into arguments over definitions, has concepts that don't correspond to any empirical phenomena, is based on ideology, etc.)
The essay involves analyzing a clinical situation (in this case a 55-year-old recently divorced, recently unemployed man, admitted to the psychiatric ward with major depression and suicidal ideation) using a theory (in this case, Roy's Adaptation Model). Done. The next step involves finding criticisms of the model...and despite the fact that I've been complaining about this class and its non-empirical nature all semester, I seem unable to come up with specific criticisms of what this nursing theory is missing.
Which is what I need your help for, because LessWrong is the best community ever when it comes to specific criticisms.
Here is a very brief overview of Roy's Adaptation Model:
- Defines 'health' as 'state or process of becoming integrated with the environment, in the domains of survival, growth, reproduction, mastery, and personal/environmental transformation.'
- Defines a 'person' as an 'adaptive system with coping processes.' Goes on to subdivide this a bit: there are 'regulator mechanisms' (i.e. innate, not consciously controlled) and 'cognitive mechanisms' of adaptation within four different modes: physiological, role function, interdependence, and self-concept.
- Defines environment as 'all conditions, circumstances, and influences that affect the development and behavior of individuals and groups.' Further subdivides environmental stimuli into focal (which demand the person to immediately adapt), contextual (which affect how they adapt), and residual (i.e. attitudes, beliefs).
- The nurse's goal is to manipulate stimuli to improve the person's level of adaptation, as well as teaching more effective coping methods.
- The steps in the process of creating a care plan are: assessment of behaviour, assessment of stimuli, choosing a nursing diagnosis from a huge lookup table, setting a goal, choosing an intervention, and evaluating the results.
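To make the mechanical flavour of those steps concrete (and maybe to help surface criticisms), here they are as a pipeline in code. This is my own hypothetical rendering; every name and the single lookup-table entry below are invented for illustration and aren't part of Roy's model:

```python
# Roy's care-planning steps rendered as a toy pipeline. All names and the
# lone lookup-table entry are invented for illustration.

DIAGNOSIS_LOOKUP_TABLE = {
    ("social withdrawal", "recent job loss"): "ineffective role transition",  # toy entry
}

def care_plan(patient, environment):
    behaviour = patient["behaviour"]                    # 1. assessment of behaviour
    stimulus = environment["focal"]                     # 2. assessment of (focal) stimuli
    diagnosis = DIAGNOSIS_LOOKUP_TABLE[(behaviour, stimulus)]  # 3. the huge lookup table
    goal = "resolve " + diagnosis                       # 4. set a goal
    intervention = "manipulate stimulus: " + stimulus   # 5. choose an intervention
    return goal, intervention                           # 6. evaluation happens afterward, in reality

plan = care_plan({"behaviour": "social withdrawal"},
                 {"focal": "recent job loss"})
```

Writing it out this way at least makes it easy to see which steps the model actually specifies and which it leaves open.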
Now my question is: what specific criticism can I make of this particular theory? Not "your definitions aren't specific enough" or "the whole field of nursing theory isn't reductionist enough", but something that this kind of theory should have but doesn't. Any ideas?