A specific bias that Lesswrongers may often get from fiction is the idea that power is proportional to difficulty. The more power something gives you, the harder it should be to get, right?
A mediocre student becomes a powerful mage through her terrible self-sacrifice and years of studying obscure scrolls. Even within the spells she can cast, the truly world-altering ones are those that demand the most laborious preparation, the most precise gestures, and the longest and most incomprehensible stream of syllables. A monk makes an arduous journey to ancient temples and learns secret techniques of spiritual oneness and/or martial asskickery, which require great dedication and self-knowledge. Otherwise, it would be cheating. The whole process of leveling up, of adding ever-increasing modifiers to die rolls, is based on the premise that power comes to those who do difficult things. And it's failsafe - no matter what you put your skill points in, you become better at something. It's a training montage, or a Hero's journey. As with other fictional evidence, these are not "just stories" -- they are powerful cultural narratives. This kind of narrative shapes moral choices and identity. So where do we see this reflected in less obviously fictional contexts?
There's the rags-to-riches story -- the immigrant who came with nothing, but by dint of hard work, now owns a business. University engineering programs are notoriously tough, because you are gaining the ability to do a lot of things (and for signalling reasons). A writer got to where she is today because she wrote and revised and submitted and revised draft after draft after draft.
In every case, there is assumed to be a direct causal link between difficulty and power. Here, these are loosely defined. Roughly, "power" means "ability to have your way", and "difficulty" is "amount of work & sacrifice required." These can be translated into units of social influence -- a.k.a. money -- and investment, a.k.a. time, or money. In many cases, power is set by supply and demand -- nobody needs a wizard if they can all cast their own spells, and a doctor can command much higher prices if they're the only one in town. The power of royalty or other birthright follows a similar pattern -- it's not "difficult", but it is scarce -- only a very few people have it, and it's close to impossible for others to get it.
Each individual gets to choose what difficult things they will try to do. Some will take longer or shorter to pay off, but each choice will have some return. And since power (partly) depends on everybody else's choices, neoclassical economics says that individuals' choices collectively determine a single market rate for the return on difficulty. So anything you do that's difficult should have the same payoff.
Anything equally difficult should have equal payoff. Apparently. Clearly, this is not the world we live in. Admittedly, there were some pretty questionable assumptions along the way, but it's almost-kind-of-reasonable to reach that conclusion if you just generalize from the fictional evidence. (Consider RPGs: they're designed to be balanced. Leveling up any class will advance you in power at a more-or-less equal rate.)
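The equalization argument can be made concrete with a toy simulation. This is my own illustration, not the author's model, and the pursuit names and numbers are invented for the sketch: each pursuit pays out a fixed pot split among everyone who chose it, and agents keep switching to whatever pays best, so returns converge toward a single market rate.

```python
# Toy sketch of the "single market rate" argument. Assumed setup: three
# pursuits with fixed total pots, twelve agents, one switch per round.
pots = {"magecraft": 90.0, "plumbing": 60.0, "writing": 30.0}
choices = {p: 4 for p in pots}  # four practitioners per pursuit to start

def payoff(p):
    """Return per practitioner for pursuit p."""
    return pots[p] / choices[p]

for _ in range(1000):
    best = max(pots, key=payoff)
    worst = min((p for p in pots if choices[p] > 1), key=payoff, default=None)
    if worst is None or payoff(best) <= payoff(worst):
        break  # no profitable switch left: rates have equalized
    choices[worst] -= 1
    choices[best] += 1  # one agent abandons the worst deal for the best

rates = {p: payoff(p) for p in pots}  # every pursuit ends at 15.0
```

Under these assumptions the market clears at the same rate everywhere, which is exactly the conclusion the post goes on to argue the real world violates.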
So how does reality differ from this fictional evidence? One direction is trivial: it's easy to find examples where what's difficult is not particularly powerful.
Writing a book is hard, and has a respectable payoff (depending on the quality of the book, publicity, etc.). Writing a book without using the letter "e", where the main character speaks only in palindromes, while typing in the dark with only your toes on a computer that's rigged to randomly switch letters around is much much more difficult, but other than perhaps gathering a small but freakishly devoted fanbase, it does not bring any more power/influence than writing any other book. It may be a sign that you are capable of more difficult things, and somebody may notice this and give you power, but this is indirect and unreliable. Similarly, writing a game in machine code or as a set of instructions for a Turing machine is certainly difficult, but also pretty dumb, and has no significant payoff beyond writing the game in a higher-level language. [Edit - thanks to TsviBT: This is assuming there already is a compiler and relevant modules. If you are first to create all of these, there might be quite a lot of benefit.]
On the other hand, some things are powerful, but not particularly difficult. On a purely physical level, this includes operating heavy machinery, or piloting drones. (I'm sure it's not easy, but the power output is immense). Conceptually, I think calculus comes in this category. It can provide a lot of insight into a lot of disparate phenomena (producing utility and its bastard cousin, money), but is not too much work to learn.
As instrumental rationalists, this is the territory we want to be in. We want to beat the market rate for turning effort into influence. So how do we do this?
This is a big, difficult question. I think it's a useful way to frame many of the goals of instrumental rationality. What major should I study? Is this relationship worthwhile? (Note: This may, if poorly applied, turn you into a terrible person. Don't apply it poorly.) What should I do in my spare time?
These questions are tough. But the examples of powerful-but-easy stuff suggest a useful principle: make use of what already exists. Calculus is powerful, but was only easy to learn because I'd already been learning math for a decade. Bulldozers are powerful, and the effort to get this power is minimal if all you have to do is climb in and drive. It's not so worthwhile, though, if you have to derive a design from first principles, mine the ore, invent metallurgy, make all the parts, and secure an oil supply first.
Similarly, if you're already a writer, writing a new book may gain you more influence than learning plumbing. And so on. This begins to suggest that we should not be too hasty to judge past investments as sunk costs. Your starting point matters in trying to find the closest available power boost. And as with any messy real-world problem, luck plays a major role, too.
Of course, there will always be some correlation between power and difficulty -- it's not that the classical economic view is wrong, there's just other factors at play. But to gain influence, you should in general be prepared to do difficult things. However, they should not be arbitrary difficult things -- they should be in areas you have specifically identified as having potential.
To make this more concrete, think of Methods!Harry. He strategically invests a lot of effort, usually at pretty good ratios -- the Gringotts money pump scheme, the True Patronus, his mixing of magic and science, and Partial Transfiguration. Now that's some good fictional evidence.
 Any kind of fiction, but particularly fantasy, sci-fi, and neoclassical economics. All works of elegant beauty, with a more-or-less tenuous relationship to real life.
 Dehghani, M., Sachdeva, S., Ekhtiari, H., Gentner, D., & Forbus, K. "The Role of Cultural Narratives in Moral Decision Making." Proceedings of the 31st Annual Conference of the Cognitive Science Society. 2009.
I once asked a room full of about 100 neuroscientists whether willpower depletion was a thing, and there was widespread disagreement with the idea. (Apropos, this is a great way to quickly gauge consensus in a field.) Basically, for a while some researchers believed that willpower depletion "is" glucose depletion in the prefrontal cortex, but some more recent experiments have failed to replicate this, e.g. by finding that the mere taste of sugar is enough to "replenish" willpower faster than the time it takes blood to move from the mouth to the brain:
Carbohydrate mouth-rinses activate dopaminergic pathways in the striatum–a region of the brain associated with responses to reward (Kringelbach, 2004)–whereas artificially-sweetened non-carbohydrate mouth-rinses do not (Chambers et al., 2009). Thus, the sensing of carbohydrates in the mouth appears to signal the possibility of reward (i.e., the future availability of additional energy), which could motivate rather than fuel physical effort.
-- Molden, D. C. et al, The Motivational versus Metabolic Effects of Carbohydrates on Self-Control. Psychological Science.
Stanford's Carol Dweck and Greg Walton even found that hinting to people that using willpower is energizing might actually make them less depletable:
When we had people read statements that reminded them of the power of willpower like, “Sometimes, working on a strenuous mental task can make you feel energized for further challenging activities,” they kept on working and performing well with no sign of depletion. They made half as many mistakes on a difficult cognitive task as people who read statements about limited willpower. In another study, they scored 15 percent better on I.Q. problems.
-- Dweck and Walton, Willpower: It’s in Your Head? New York Times.
While these are all interesting empirical findings, there’s a very similar phenomenon that’s much less debated and which could explain many of these observations, but I think gets too little popular attention in these discussions:
Willpower is distractible.
Indeed, willpower and working memory are both strongly mediated by the dorsolateral prefrontal cortex, so “distraction” could just be the two functions funging against one another. To use the terms of Stanovich popularized by Kahneman in Thinking: Fast and Slow, "System 2" can only override so many "System 1" defaults at any given moment.
So what’s going on when people say "willpower depletion"? I’m not sure, but even if willpower depletion is not a thing, the following distracting phenomena clearly are:
- Physical fatigue (like from running)
- Physical discomfort (like from sitting)
- That specific-other-thing you want to do
- Anxiety about willpower depletion
- Indignation at being asked for too much by bosses, partners, or experimenters...
... and "willpower depletion" might be nothing more than mental distraction by one of these processes. Perhaps it really is better to think of willpower as power (a rate) than energy (a resource).
If that’s true, then figuring out what processes might be distracting us might be much more useful than saying “I’m out of willpower” and giving up. Maybe try having a sip of water or a bit of food if your diet permits it. Maybe try reading lying down to see if you get nap-ish. Maybe set a timer to remind you to call that friend you keep thinking about.
The last two bullets,
- Anxiety about willpower depletion
- Indignation at being asked for too much by bosses, partners, or experimenters...
are also enough to explain why being told willpower depletion isn’t a thing might reduce the effects typically attributed to it: we might simply be less distracted by anxiety or indignation about doing “too much” willpower-intensive work in a short period of time.
Of course, any speculation about how human minds work in general is prone to the "typical mind fallacy". Maybe my willpower is depletable and yours isn’t. But then that wouldn’t explain why you can cause people to exhibit less willpower depletion by suggesting otherwise. But then again, most published research findings are false. But then again the research on the DLPFC and working memory seems relatively old and well established, and distraction is clearly a thing...
All in all, more of my chips are falling on the hypothesis that willpower “depletion” is often just willpower distraction, and that finding and addressing those distractions is probably a better strategy than avoiding activities altogether in order to "conserve willpower".
Followup to: Ask and Guess
Ask culture: "I'll be in town this weekend for a business trip. Is it cool if I crash at your place?" Response: “Yes” or “no”.
Guess culture: "Hey, great news! I'll be in town this weekend for a business trip!" Response: Infer that they might be telling you this because they want something from you, conclude that they might want a place to stay, and offer your hospitality only if you want to. Otherwise, pretend you didn’t infer that.
The two basic rules of Ask Culture: 1) Ask when you want something. 2) Interpret things as requests and feel free to say "no".
The two basic rules of Guess Culture: 1) Ask for things if, and *only* if, you're confident the person will say "yes". 2) Interpret requests as expectations of "yes", and, when possible, avoid saying "no".
Both approaches come with costs and benefits. In the end, I feel pretty strongly that Ask is superior.
But these are not the only two possibilities!
"I'll be in town this weekend for a business trip. I would like to stay at your place, since it would save me the cost of a hotel, plus I would enjoy seeing you and expect we’d have some fun. I'm looking for other options, though, and would rather stay elsewhere than inconvenience you." Response: “I think I need some space this weekend. But I’d love to get a beer or something while you’re in town!” or “You should totally stay with me. I’m looking forward to it.”
There is a third alternative, and I think it's probably what rationalist communities ought to strive for. I call it "Tell Culture".
The two basic rules of Tell Culture: 1) Tell the other person what's going on in your own mind whenever you suspect you'd both benefit from them knowing. (Do NOT assume others will accurately model your mind without your help, or that it will even occur to them to ask you questions to eliminate their ignorance.) 2) Interpret things people tell you as attempts to create common knowledge for shared benefit, rather than as requests or as presumptions of compliance.
Suppose you’re in a conversation that you’re finding aversive, and you can’t figure out why. Your goal is to procure a rain check.
- Guess: *You see this annoyed body language? Huh? Look at it! If you don’t stop talking soon I swear I’ll start tapping my foot.* (Or, possibly, tell a little lie to excuse yourself. “Oh, look at the time…”)
- Ask: “Can we talk about this another time?”
- Tell: "I'm beginning to find this conversation aversive, and I'm not sure why. I propose we hold off until I've figured that out."
Here are more examples from my own life:
- "I didn't sleep well last night and am feeling frazzled and irritable today. I apologize if I snap at you during this meeting. It isn’t personal."
- "I just realized this interaction will be far more productive if my brain has food. I think we should head toward the kitchen."
- "It would be awfully convenient networking for me to stick around for a bit after our meeting to talk with you and [the next person you're meeting with]. But on a scale of one to ten, it's only about 3 useful to me. If you'd rate the loss of utility for you as two or higher, then I have a strong preference for not sticking around."
The burden of honesty is even greater in Tell culture than in Ask culture. To a Guess culture person, I imagine much of the above sounds passive aggressive or manipulative, much worse than the rude bluntness of mere Ask. It’s because Guess people aren’t expecting relentless truth-telling, which is exactly what’s necessary here.
If you’re occasionally dishonest and tell people you want things you don't actually care about--like their comfort or convenience--they’ll learn not to trust you, and the inherent freedom of the system will be lost. They’ll learn that you only pretend to care about them to take advantage of their reciprocity instincts, when in fact you’ll count them as having defected if they respond by stating a preference for protecting their own interests.
Tell culture is cooperation with open-source code.
This kind of trust does not develop overnight. Here is the most useful Tell tactic I know of for developing that trust with a native Ask or Guess. It’s saved me sooooo much time and trouble, and I wish I’d thought of it earlier.
"I'm not asking because I expect you to say ‘yes’. I'm asking because I'm having trouble imagining the inside of your head, and I want to understand better. You are completely free to say ‘no’, or to tell me what you’re thinking right now, and I promise it will be fine." It is amazing how often people quickly stop looking shifty and say 'no' after this, or better yet begin to discuss further details.
There are things that are worthless-- that provide no value. There are also things that are worse than worthless-- things that provide negative value. I have found that people sometimes confuse the latter for the former, which can carry potentially dire consequences.
One simple example of this is in fencing. I once fenced with an opponent who put a bit of an unnecessary twirl on his blade when recovering from each parry. After our bout, one of the spectators pointed out that there wasn't any point to the twirls and that my opponent would improve by simply not doing them anymore. My opponent claimed that, even if the twirls were unnecessary, at worst they were merely an aesthetic preference that was useless but not actually harmful.
However, the observer explained that any unnecessary movement is harmful in fencing, because it spends time and energy that could be put to better use-- even if that use is just recovering a split second faster! 
During our bout, I indeed scored at least one touch because my opponent's twirling recovery was slower than a less flashy standard movement. That touch could well be the difference between victory and defeat; in a real sword fight, it could be the difference between life and death.
This isn't, of course, to say that everything unnecessary is damaging. There are many things that we can simply be indifferent towards. If I am about to go and fence a bout, the color of the shirt that I wear under my jacket is of no concern to me-- but if I had spent significant time before the bout debating over what shirt to wear instead of training, it would become a damaging detail rather than a meaningless one.
In other words, the real damage is dealt when something is not only unnecessary, but consumes resources that could instead be used for productive tasks. We see this relatively easily when it comes to matters of money, but when it comes to wastes of time and effort, many fail to make the inductive leap.
 Miyamoto Musashi agrees:
The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him. You must thoroughly research this.
Related: The Martial Art of Rationality
One principle in the martial arts is that arts that are practiced with aliveness tend to be more effective.
"Aliveness" in this case refers to a set of training principles focused on simulating conditions in an actual fight as closely as possible in training. Rather than train techniques in a vacuum or against a compliant opponent, alive training focuses on training with movement, timing, and energy under conditions that approximate those where the techniques will actually be used.
A good example of training that isn't alive would be methods that focused entirely on practicing kata and forms without making contact with other practitioners; a good example of training that is alive would be methods that focused on verifying the efficacy of techniques through full-contact engagement with other practitioners.
Aliveness tends to create an environment free from epistemic viciousness-- if your technique doesn't work, you'll know because you won't be able to use it against an opponent. Further, if your technique does work, you'll know that it works because you will have applied it against people trying to prevent you from doing so, and the added confidence will help you better apply that technique when you need it.
Evidence from martial arts competitions indicates that those who practice with aliveness are more effective than others. One of the chief reasons that Brazilian jiu-jitsu (BJJ) practitioners were so successful in early mixed martial arts tournaments was that BJJ-- a martial art that relies primarily on grappling and the use of submission holds and locks to defeat the opponent-- can be trained safely with almost complete aliveness, whereas many other martial arts cannot.
Now, this is not to say that one should only attempt to practice martial arts under completely realistic conditions. For instance, no martial arts school that I am aware of randomly ambushes or attempts to mug its students on the streets outside of class in order to test how they would respond under truly realistic conditions.
Even in the age of sword duels, people would train with blunt weapons and protective armor rather than sharp weapons and ordinary clothes. Would training with sharp weapons and ordinary clothes be more alive than training with blunt weapons and protective armor? Certainly, but the trainees wouldn't be! And yet training with blunt weapons is still useful-- the fact that training does not fully approximate realistic conditions does not intrinsically mean it is bad.
That being said, generally speaking martial arts training that is more alive-- that better approximates realistic fighting conditions-- is more effective within reasonable safety margins. There is a growing consensus among students of martial arts who are looking for effective self-defense techniques that the specific martial art one practices is not hugely relevant, and that what matters more is the extent to which the training does or doesn't use aliveness.
Aliveness and Rationality
So, that's all well and good-- but how can we apply these principles to rationality practice?
While martial arts training has very clear methods of measuring whether or not skills work (can I apply this technique against a resisting opponent?), rationality training is much murkier-- measuring rationality skills is a nontrivial problem.
Further, under normal circumstances the opponent that you are resisting when applying rationality techniques is your own brain, not an external enemy. This makes applying appropriate levels of resistance in training difficult, because it's very easy to cheat yourself. The best method that I have found thus far is lucid dreaming, as forcing your dreaming brain to recognize its true state through the various hallucinations and constructed memories associated with dreaming is no easy task.
That being said, I make no claims to special or unique knowledge in this area. If anyone has suggestions for useful methods of "live" rationality practice, I'd love to hear them.
 For further explanation, see Matt Thornton's classic video "Why Aliveness?"
 If your plan is to choke someone until they fall unconscious, it is possible to safely train for this with nearly complete aliveness by wrestling against an opponent and simply releasing the chokehold before they actually fall unconscious. By contrast, it is much harder to safely train to punch someone into unconsciousness, and harder still to safely train to break people's necks.
 The game of Assassins does do this, but usually follows rules that are constrained enough to make it a suboptimal method of training.
 There are some contexts in which rationality techniques are applied in order to overcome an external enemy. Competitive games and some sports are a good method of finding practice in this respect. For instance, in order to be a competitive Magic: The Gathering player, you need to engage many epistemic and instrumental rationality skills. Competitive poker can offer similar development.
Note: this post is no longer endorsed by the author, for reasons partially described here.
In the spirit of radioing back to describe a path:
The truly absurd thing about dreams lies not with their content, but with the fact that we believe them. Perfectly outrageous and impossible things can occur in dreams without the slightest hesitance to accept them on the part of the dreamer. I have often dreamed myself into bizarre situations that come complete with constructed memories explaining how they secretly make sense!
However, sometimes we break free from these illusions and become aware of the fact that we are dreaming. This is known as lucid dreaming and can be an extremely pleasant experience. Unfortunately, relatively few people experience lucid dreams "naturally;" fortunately, lucid dreaming is also a skill, and like any other skill it can be trained.
While this is all very interesting, you may be wondering what it has to do with rationality. Simply put, I have found lucid dreaming perhaps the best training currently available when it comes to increasing general rationality skills. It is one thing to notice when you are confused by ordinary misunderstandings or tricks; it is another to notice while your own brain is actively constructing memories and environments to fool you!
I've been involved in lucid dreaming for about eight years now and teaching lucid dreaming for two, so I'm pretty familiar with it on a non-surface level. I've also been explicitly looking into the prospect of using lucid dreaming for rationality training purposes since 2010, and I'm fairly confident that it will prove useful for at least some people here.
If you can get yourself to the point where you can consistently induce lucid dreaming by noticing the inconsistencies and absurdities of your dream state, I predict that you will become a much stronger rationalist in the process. If my prediction is correct, lucid dreaming allows you to hone rationality skills while also having fun, and best of all permits you to do this in your sleep!
If this sounds appealing to you, perhaps the most concise and efficient resource for learning lucid dreaming is the book Lucid Dreaming, by Dr. Stephen LaBerge. However, this is a book and costs money. If you're not into that, a somewhat less efficient but much more comprehensive view of lucid dreaming can be found on the website dreamviews.com. I further recommend that anyone interested in this check out the Facebook group Rational Dreamers. Recently founded by LW user BrienneStrohl, this group provides an opportunity to discuss lucid dreaming and related matters in an environment free from some of the mysticism and confusion that otherwise surrounds this issue.
All in all, it seems that lucid dreaming may offer a method of training your rationality in a way that is fun, interesting, and takes essentially none of your waking hours. Thus, if you are interested in increasing your general rationality, I strongly recommend investigating lucid dreaming. To be frank, my main concern about lucid dreaming as a rationality practice is simply that it seems too good to be true.
 Note that this is only one of many ways of inducing lucid dreaming. However, most other techniques that I have tried are not necessarily useful forms of rationality practice, effective as they might be.
 And, to be honest, "fun" is an understatement.
Since it had a decent amount of traffic until a good two weeks into September (and I thought it was a good idea), I'm reviving this thread.
In an attempt to encourage more people to actually do awesome things (a la instrumental rationality), I am proposing a new monthly thread (can be changed to bi-weekly, should that be demanded). Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.
Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesomest thing they've done all month. Not things you will do. Not things you're working on. Things you have already done. This is to cultivate an environment of object-level productivity rather than meta-productivity methods.
So, what's the coolest thing you've done this month?
You've had those moments -- the ones where you're very aware of where you're at in the world, and you're mapping out your future and plans very smartly, and you're feeling great about taking action and pushing important things forwards.
I used to find myself only reaching that place, at random, once or twice per year.
But every time I did, I would spend just a few hours sketching out plans, thinking about my priorities, discarding old things I used to do that didn't bring much value, and pushing my limits to do new worthwhile things. I thought, "This is really valuable. I should do this more often."
Eventually, I named that state: Reflective Control.
As often happens, naming something makes it easier to do more often.
At this time, I still had only a hazy, poorly formed feeling for what it was. So I tried to define it. After many attempts, I came to this:
> Reflective Control is when you're firmly off autopilot, in a high-positive and high-willpower state, and are able to take action.
You'll note there are four discrete components to it: firmly off autopilot (reflective), high positivity, high will, and being capable of and oriented towards taking action.
I also asked myself, "How to know if you're in Reflective Control?"
My best answer, in the form of an exercise, is:
> You set aside the impulses/distractions, and try to set a concrete Control-related goal. This is meta-work, meaning the process of defining your life and what needs to happen next. You do this calmly. By setting a concrete Control-related goal successfully and then executing on it, you know you're in an RC state.
> Example: "I will identify all the open projects I've got, and the next steps for each of them."
With that definition and that exercise in hand, I was able to do something that works almost magically when I want to take on big challenges: I can rate myself from 1-100 on each of the four key components, set a concrete goal to achieve, and analyze a little about which factor might be holding me back. Here is an example from my journal:
> Reflective 70/100, positive 70/100, will 65/100, action 40/100… ok, I'm feeling good overall, just some anxiety suppressing will a little and action quite a bit, but no problem. My goal is to finish the xxx outline before I leave here.
I've found this incredibly useful. Summary:
- There's a state I call "Reflective Control" where I'm off autopilot and thinking (reflective), in a positive mood, with willpower, and action-oriented.
- I can put explicit numbers on this, somewhat subjectively, from 1-100. This lets me see where the weak link in the chain is, if any.
- By setting a concrete goal and working towards it, you can get more objective feedback and shore up whichever element is lowest with some practical actions.
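The rating-and-diagnosis step above can be sketched in a few lines. This is my own minimal illustration, not the author's actual practice or tooling; the journal numbers are taken from the example above and the function name is invented:

```python
# Sketch of the 1-100 self-rating: score the four components of
# Reflective Control, then surface the weakest link so attention
# (and a concrete goal) can go there first.
def weakest_link(scores):
    """scores maps component name -> 1-100 self-rating; returns the low point."""
    name = min(scores, key=scores.get)
    return name, scores[name]

journal_entry = {"reflective": 70, "positive": 70, "will": 65, "action": 40}
component, score = weakest_link(journal_entry)  # -> ("action", 40)
```

With the journal numbers above, "action" at 40 is flagged as the factor holding things back, matching the self-diagnosis in the entry.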
Related to: The Good News of Situationist Psychology
Perhaps the most significant teaching social psychology has to offer is that most of our behaviors are determined by situational factors inherent to our settings, not by our personal qualities.
Some consider this depressing-- for instance, the Milgram experiments in obedience to authority and Stanford prison experiment are often cited as examples of how settings can cause otherwise-good people to participate in and even support unethical and dangerous behavior. However, as lukeprog points out in The Good News of Situationist Psychology, this principle can also be considered uplifting. After all, if our settings have such an effect on our behavior, they are thus a powerful tool that we can employ to make ourselves more effective.
Changing Your Physical Settings
One relatively easy place to start making such changes is in your personal life. I have found that great productivity increases can be gained through relatively minor changes in lifestyle-- or even seemingly-trivial matters such as the position of physical (or sometimes digital) objects in your environment!
For instance, I recently noticed a tendency in myself to "wake up" and then waste the next twenty or thirty minutes aimlessly browsing the Internet on my laptop in bed before actually getting up and eating breakfast, showering, going to work, etc. Since I value time, especially morning time, substantially, I decided that action should be taken to avoid this.
At first, I figured that once I had noticed the problem I could simply apply willpower and avoid it, but this proved less than effective-- it turns out that my willpower is not at its strongest when I first wake up and am still a little groggy! I then decided to apply situationist principles instead. The most obvious setting contributing to the problem was that I was using an alarm app on my computer to wake up in the morning, so turning off the alarm forced me to interact with the computer first thing.
So I picked up an IKEA alarm clock, turned off my alarm app, and moved my computer to the kitchen instead of my room-- problem solved. In my new settings, browsing in bed was outright ridiculous-- I'd have to wake up, go downstairs to the kitchen, pick up my computer, and bring it back up to my room with me. Not a likely course of events!
Changing Your Mental Settings
While physical environments can certainly produce changes in behavior, social and intellectual environments can too.
For instance, one of my friends from undergrad took an interesting approach to choosing his major. He knew that he wanted a solid private-sector income that would allow him to support a family, but didn't particularly care what field it was in. Overall, he wanted to ensure that whatever major he chose would give him the highest possible chance of getting a good job without unusual effort or circumstances.
Therefore, during winter term of his sophomore year, prior to declaring, he went around to all the seniors he could get to talk to him and asked them what their major was, what they were doing post-graduation, and how much money they anticipated making. He found that the CS majors tended to have more private-sector job prospects and higher average starting salaries than students in other fields, so he decided to declare a CS major.
While I don't think my friend's approach is necessarily the best possible option for determining what to do with your life, it certainly beats the sort of unstructured guessing that I've seen many others do. By considering academic majors as settings and examining what setting produced the best result on average, my friend managed to find a field and career that he's by all indications quite happy in-- and with a minimal amount of risk and stress involved.
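The friend's survey amounts to grouping responses by major and comparing average expected salaries. A minimal sketch of that aggregation, with entirely invented sample data (the post reports no actual numbers):

```python
from collections import defaultdict

# Hypothetical survey responses: (major, expected starting salary).
responses = [
    ("CS", 95000), ("CS", 105000),
    ("History", 45000), ("Biology", 52000),
]

# Group salaries by major, then compare averages.
by_major = defaultdict(list)
for major, salary in responses:
    by_major[major].append(salary)

averages = {m: sum(s) / len(s) for m, s in by_major.items()}
best = max(averages, key=averages.get)
print(best)  # -> CS
```

The same "treat options as settings and compare average outcomes" pattern applies to any choice where you can survey people already downstream of each option.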
Human psychology is greatly influenced by situational factors, and in more ways than a naive reasoner might expect. If you're looking to improve your life across any particular axis, one good way to start is by examining your current physical, social, and intellectual settings and paying close attention to how changes in those settings might help accomplish your goals.
 If you don't believe that this is true, I advise acting as if you do and proceeding anyway. I find this method effective enough for me and others, and easy enough to implement, that it seems well worth testing even if you don't fully believe the claims behind it. At worst, it's a potential epistemic/instrumental tradeoff.
 See for instance Joseph Heath and Joel Anderson, Procrastination and the Extended Will (2009).
 In the course of researching and writing this post, I encountered some objections to the resource-expenditure theory of willpower (many of which have already been summarized here by Jess_Riedel). I suspect my belief that willpower is weakest while tired or just waking may be limiting in the same sense that believing willpower is a limited resource appears to be limiting, but I have yet to test this at the time of writing.
 If you're interested in seeing other examples of ways in which we can structure the physical objects around us in order to become more productive, you may wish to check out Alicorn's How to Have Things Correctly and fowlertm's related How to Have Space Correctly. Several of Alyssa Vance's Random Life Tips also relate to this matter.
 The friend in question is now employed as a software engineer at a tech company and by all indications loves his job. Note though that this post isn't saying "you should be a CS major." Things change over time, and what was a good choice for one person and one time may not be a good choice for another person or another time.