
Tell Culture

89 BrienneStrohl 18 January 2014 08:13PM

Followup to: Ask and Guess

Ask culture: "I'll be in town this weekend for a business trip. Is it cool if I crash at your place?" Response: “Yes“ or “no”.

Guess culture: "Hey, great news! I'll be in town this weekend for a business trip!" Response: Infer that they might be telling you this because they want something from you, conclude that they might want a place to stay, and offer your hospitality only if you want to. Otherwise, pretend you didn’t infer that.

The two basic rules of Ask Culture: 1) Ask when you want something. 2) Interpret things as requests and feel free to say "no".

The two basic rules of Guess Culture: 1) Ask for things if, and *only* if, you're confident the person will say "yes". 2) Interpret requests as expectations of "yes", and, when possible, avoid saying "no".

Both approaches come with costs and benefits. In the end, I feel pretty strongly that Ask is superior. 

But these are not the only two possibilities!

"I'll be in town this weekend for a business trip. I would like to stay at your place, since it would save me the cost of a hotel, plus I would enjoy seeing you and expect we’d have some fun. I'm looking for other options, though, and would rather stay elsewhere than inconvenience you." Response: “I think I need some space this weekend. But I’d love to get a beer or something while you’re in town!” or “You should totally stay with me. I’m looking forward to it.”

There is a third alternative, and I think it's probably what rationalist communities ought to strive for. I call it "Tell Culture".

The two basic rules of Tell Culture: 1) Tell the other person what's going on in your own mind whenever you suspect you'd both benefit from them knowing. (Do NOT assume others will accurately model your mind without your help, or that it will even occur to them to ask you questions to eliminate their ignorance.) 2) Interpret things people tell you as attempts to create common knowledge for shared benefit, rather than as requests or as presumptions of compliance.

Suppose you’re in a conversation that you’re finding aversive, and you can’t figure out why. Your goal is to procure a rain check.

  • Guess: *You see this annoyed body language? Huh? Look at it! If you don’t stop talking soon I swear I’ll start tapping my foot.* (Or, possibly, tell a little lie to excuse yourself. “Oh, look at the time…”) 
  • Ask: “Can we talk about this another time?”
  • Tell: "I'm beginning to find this conversation aversive, and I'm not sure why. I propose we hold off until I've figured that out."

Here are more examples from my own life:

  • "I didn't sleep well last night and am feeling frazzled and irritable today. I apologize if I snap at you during this meeting. It isn’t personal." 
  • "I just realized this interaction will be far more productive if my brain has food. I think we should head toward the kitchen." 
  • "It would be awfully convenient networking for me to stick around for a bit after our meeting to talk with you and [the next person you're meeting with]. But on a scale of one to ten, it's only about 3 useful to me. If you'd rate the loss of utility for you as two or higher, then I have a strong preference for not sticking around." 

The burden of honesty is even greater in Tell culture than in Ask culture. To a Guess culture person, I imagine much of the above sounds passive-aggressive or manipulative, much worse than the rude bluntness of mere Ask. That's because Guess people aren't expecting relentless truth-telling, which is exactly what's necessary here.

If you’re occasionally dishonest and tell people you want things you don't actually care about--like their comfort or convenience--they’ll learn not to trust you, and the inherent freedom of the system will be lost. They’ll learn that you only pretend to care about them to take advantage of their reciprocity instincts, when in fact you’ll count them as having defected if they respond by stating a preference for protecting their own interests.

Tell culture is cooperation with open-source code.
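As a toy illustration of that metaphor, in the spirit of program-equilibrium games: when both parties' decision procedures are visible, cooperation can be made conditional on what inspection reveals. The sketch below is mine, not the author's, and the name `tell_bot` is invented; it cooperates exactly when the other agent verifiably runs the same open policy.

```python
import inspect

def tell_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent verifiably runs this same open policy."""
    my_source = inspect.getsource(tell_bot)
    return "C" if opponent_source.strip() == my_source.strip() else "D"

# Two agents with identical, visible source recognize each other and cooperate;
# an opaque or different policy gets a defection.
me = inspect.getsource(tell_bot)
print(tell_bot(me))                       # "C"
print(tell_bot("def guess_bot(_): ..."))  # "D"
```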

This kind of trust does not develop overnight. Here is the most useful Tell tactic I know of for developing that trust with a native of Ask or Guess culture. It's saved me sooooo much time and trouble, and I wish I'd thought of it earlier.

"I'm not asking because I expect you to say ‘yes’. I'm asking because I'm having trouble imagining the inside of your head, and I want to understand better. You are completely free to say ‘no’, or to tell me what you’re thinking right now, and I promise it will be fine." It is amazing how often people quickly stop looking shifty and say 'no' after this, or better yet begin to discuss further details.

Worse than Worthless

14 katydee 30 December 2013 01:47AM

There are things that are worthless-- that provide no value. There are also things that are worse than worthless-- things that provide negative value. I have found that people sometimes confuse the latter for the former, which can carry potentially dire consequences.

One simple example of this is in fencing. I once fenced with an opponent who put a bit of an unnecessary twirl on his blade when recovering from each parry. After our bout, one of the spectators pointed out that there wasn't any point to the twirls and that my opponent would improve by simply not doing them anymore. My opponent claimed that, even if the twirls were unnecessary, at worst they were merely an aesthetic preference that was useless but not actually harmful.

However, the observer explained that any unnecessary movement is harmful in fencing, because it spends time and energy that could be put to better use-- even if that use is just recovering a split second faster! [1]

During our bout, I indeed scored at least one touch because my opponent's twirling recovery was slower than a less flashy standard movement. That touch could well be the difference between victory and defeat; in a real sword fight, it could be the difference between life and death.

This isn't, of course, to say that everything unnecessary is damaging. There are many things that we can simply be indifferent towards. If I am about to go and fence a bout, the color of the shirt that I wear under my jacket is of no concern to me-- but if I had spent significant time before the bout debating over what shirt to wear instead of training, it would become a damaging detail rather than a meaningless one.

In other words, the real damage is dealt when something is not only unnecessary, but consumes resources that could instead be used for productive tasks. We see this relatively easily when it comes to matters of money, but when it comes to wastes of time and effort, many fail to make the inductive leap.

 

[1] Miyamoto Musashi agrees:

The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him. You must thoroughly research this.

A proposed inefficiency in the Bitcoin markets

3 Liron 27 December 2013 03:48AM

Salviati: Simplicio, do you think the Bitcoin markets are efficient?

Simplicio: If you'd asked me two years ago, I would have said yes. I know hindsight is 20/20, but even at the time, I think the fact that relatively few people were trading it would have risen to prominence in my analysis.

Salviati: And what about today?

Simplicio: Today, it seems like there's no shortage of trading volume. The hedge funds of the world have heard of Bitcoin, and had their quants do their fancy analyses on it, and they actively trade it.

Salviati: Well, I'm certainly not a quant, but I think I've spotted a systematic market inefficiency. Would you like to hear it?

Simplicio: Nah, I'm good.

Salviati: Did you hear what I said? I think I've spotted an exploitable pattern of price movements in a $10 Billion market. If I'm right, it could make us a lot of money.

Simplicio: Sure, but you won't convince me that whatever pattern you're thinking of is a "reliable" one.

Salviati: Come on, you don't even know what my argument is.

Simplicio: But I know how your argument is going to be structured. First you're going to identify some property of Bitcoin prices in past data. Then you'll explain some causal model you have which supposedly accounts for why prices have had that property in the past. Then you'll say that your model will continue to account for that same property in future Bitcoin prices.

Salviati: Yeah, so? What's wrong with that?

Simplicio: The problem is that you are not a trained quant, and therefore, your brain is not capable of bringing a worthwhile property of Bitcoin prices to your attention.

Salviati: Dude, I just want to let you know because this happens often and no one else is ever going to say anything: you're being a dick.

Simplicio: Look, quants are good at their job. To a first approximation, quants are like perfect Bayesian reasoners who maintain a probability distribution over the "reliability" of every single property of Bitcoin prices that you and I are capable of formulating. So this argument you're going to make to me, a quant has already made to another quant, and the other quant has incorporated it into his hedge fund's trading algorithms.

Salviati: Fine, but so what if quants have already figured out my argument for themselves? We can make money on it too.

Simplicio: No, we can't. I told you I'm pretty confident that the market is efficient, i.e. anti-inductive, meaning the quants of the world haven't left behind any reliable patterns that an armchair investor like you can detect and profit from.

Salviati: Would you just shut up and let me say my argument?

Simplicio: Whatever, knock yourself out.

Salviati: Ok, here goes. Everyone knows Bitcoin prices are volatile, right?

Simplicio: Yeah, highly volatile. But at any given moment, you don't know if the volatility is going to move the price up or down next. From your state of knowledge, it looks like a random walk. If today's Bitcoin price is $1000, then tomorrow's price is as likely to be $900 as it is to be $1100.

Salviati: I agree that the Random Walk Hypothesis provides a good model of prices in efficient markets, and that the size of each step in a random walk provides a good model of price volatility in efficient markets.

Simplicio: See, I told you you wouldn't convince me.

Salviati: Ah, but my empirical observation of Bitcoin prices is inconsistent with the Random Walk hypothesis. So the only thing I'm led to conclude is that the Bitcoin market is not efficient.

Simplicio: What do you mean "inconsistent"?

Salviati: I mean Bitcoin's past prices don't look much like a random walk. They look more like a random walk on a log scale. If today's price is $1000, then tomorrow's price is equally likely to be $900 or $1111. So if I buy $1000 of Bitcoin today, I expect to have 0.5($900) + 0.5($1111) = $1005.50 tomorrow.

Simplicio: How do you know that? Did you write a script to loop through Bitcoin's daily closing price on Mt. Gox and simulate the behavior of a Bayesian reasoner with a variable-step-size random-walk prior and a second Bayesian reasoner with a variable-step-size log-random-walk prior, and thus calculate a much higher Bayesian Score for the log-random-walk model?

Salviati: Yeah, I did.

Simplicio: That's very virtuous of you.

[This is a fictional dialogue. The truth is, I was too lazy to do that. Can someone please do that? I would much appreciate it. --Liron.]
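A minimal sketch of what that script might look like, assuming `closes` already holds the daily closing prices. It collapses the variable-step-size Bayesian priors into a single maximum-likelihood volatility per model, so it's cruder than what Simplicio describes, but it makes the two hypotheses directly comparable:

```python
import math

def gaussian_loglik(steps):
    """Log-likelihood of steps under a zero-mean Gaussian with MLE variance."""
    n = len(steps)
    var = sum(s * s for s in steps) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def compare_models(closes):
    """Score a linear random walk vs. a log random walk on daily closes."""
    linear_steps = [b - a for a, b in zip(closes, closes[1:])]
    log_steps = [math.log(b / a) for a, b in zip(closes, closes[1:])]
    # The log model is a density over log-prices; add the change-of-variables
    # (Jacobian) term so both scores are densities over the same price data.
    jacobian = -sum(math.log(p) for p in closes[1:])
    return {
        "linear_random_walk": gaussian_loglik(linear_steps),
        "log_random_walk": gaussian_loglik(log_steps) + jacobian,
    }

# closes = [...]  # daily closing prices from your exchange data of choice
# print(compare_models(closes))  # higher log-likelihood = better fit
```

The Jacobian term is the one easy thing to get wrong: the linear model assigns density to price steps while the log model assigns density to log-price steps, so without the correction the two scores aren't on the same scale.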

Salviati: So, have I convinced you that the market is anti-inductive now?

Simplicio: Well, you've empirically demonstrated that the log Random Walk Hypothesis was a good model for predicting Bitcoin prices in the past. But that's just a historical pattern. My original point was that you're not qualified to evaluate which historical patterns are *reliable* patterns. The Bitcoin markets are full of pattern-annihilating forces, and you're not qualified to evaluate which past-data-fitting models are eligible for future-data-fitting.

Salviati: Ok, I'm not saying you have to believe that the future accuracy of log-Random-Walk will probably be higher than the future accuracy of linear Random Walk. I'm just saying you should perform a Bayesian update in the direction of that conclusion.

Simplicio: Ok, but the only reason the update has nonzero strength is because I assigned an a-priori chance of 10% to the set of possible worlds wherein Bitcoin markets were inefficient, and that set of possible worlds gives a higher probability that a model like your log-Random-Walk model would fit the price data well. So I update my beliefs to promote the hypothesis that Bitcoin is inefficient, and in particular that it is inefficient in a log-Random-Walk way.

Salviati: Thanks. And hey, guess what: I think I've traced the source of the log-Random-Walk regularity.

Simplicio: I'm surprised you waited this long to mention that.

Salviati: I figured that if I mentioned it earlier, you'd snap back about how efficient markets sever the causal connection between would-be price-regularity-causing dynamics and actual prices.

Simplicio: Fair enough.

Salviati: Anyway, the reason Bitcoin prices follow a log-Random-Walk is because they reflect the long-term Expected Value of Bitcoin's actual utility.

Simplicio: Bitcoin has no real utility.

Salviati: It does. It's liquid in novel, qualitatively different ways. It's kind of anonymous. It's a more stable unit of account than the official currencies of some countries.

Simplicio: Come on, how much utility is all that really worth in expectation?

Salviati: I don't know. The Bitcoin economy could be anywhere from hundreds of millions of dollars to trillions of dollars. Our belief about the long-term future value of a single BTC is spread out across a range whose 90% confidence interval is something like [$10, $100,000] for 1BTC.

Simplicio: Are you saying it's spread out over the interval [$10, $100,000] in a uniform distribution?

Salviati: Nope, it's closer to a bell curve centered at $1000 on a log scale. It gives equal probability of ~10% both to the $10-100 range and to the $10,000-100,000 range.

Simplicio: How do you know that everyone's beliefs are shaped like that?

Salviati: Because everyone has a causal model in their head with a node for "order of magnitude of Bitcoin's value", and that node varies in the characteristically linear fashion of a Bayes net.

Simplicio: I don't feel confident in that explanation.

Salviati: Then take whatever explanation you give yourself to explain the effectiveness of Fermi estimates. Those output a bell curve on a log scale too, and it seems like estimating Bitcoin's future value should have a lot of methodology in common with doing back-of-the-envelope calculations about the blast radius of a nuclear bomb.

Simplicio: Alright.
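As a quick check on the numbers Salviati is throwing around, here is a sketch of that belief distribution: a bell curve over log10 of the price with median $1000 and 90% of its mass between $10 and $100,000. With exactly those parameters, each decade range he names gets closer to 16% than ~10%, so his figures should be read as rough:

```python
import math
from statistics import NormalDist

# Belief over log10(price): median $1000 (log10 = 3), with 90% of the mass
# between $10 and $100,000 (log10 between 1 and 5).
sigma = 2.0 / NormalDist().inv_cdf(0.95)  # two decades out to the 95th percentile
belief = NormalDist(mu=3.0, sigma=sigma)

def prob_between(lo_dollars, hi_dollars):
    """Probability that the long-term value lands between lo and hi dollars."""
    return belief.cdf(math.log10(hi_dollars)) - belief.cdf(math.log10(lo_dollars))

print(prob_between(10, 100))          # ~0.16 for the $10-$100 decade...
print(prob_between(10_000, 100_000))  # ...and ~0.16 for $10k-$100k, by symmetry
```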

Salviati: So the causality of Bitcoin prices roughly looks like this:

[Beliefs about order of magnitude of Bitcoin's future value] --> [Beliefs about Bitcoin's future price] --> [Trading decisions]

Simplicio: Okay, I see how the first node can fluctuate a lot in reaction to daily news events, and that would have a disproportionately high effect on the last node. But how can an efficient market avoid that kind of log-scale fluctuation? Efficient markets always reflect a consensus estimate of an asset's price, and it's rational to arrive at an estimate that fluctuates on a log scale!

Salviati: Actually, I think a truly efficient market shouldn't skip around across orders of magnitude just because expectations of future prices do. I think truly efficient markets show some degree of "drag", which should be invisible in typical cases like publicly-traded stocks, but become noticeable in cases of order-of-magnitude value-uncertainty like Bitcoin.

Simplicio: So you think you're the only one smart enough to notice that it's worth trading Bitcoin so as to create drag on Bitcoin's log-scale random walk?

Salviati: Yeah, I think maybe I am.


Salviati is claiming that his empirical observations show a lack of drag on Bitcoin price shifts, which would be actionable evidence of inefficiency. Discuss.

Aliveness in Training

9 katydee 31 October 2013 01:17AM

Related: The Martial Art of Rationality

One principle in the martial arts is that arts that are practiced with aliveness tend to be more effective.

"Aliveness" in this case refers to a set of training principles focused on simulating conditions in an actual fight as closely as possible in training. Rather than train techniques in a vacuum or against a compliant opponent, alive training focuses on training with movement, timing, and energy under conditions that approximate those where the techniques will actually be used.[1]

A good example of training that isn't alive would be methods that focused entirely on practicing kata and forms without making contact with other practitioners; a good example of training that is alive would be methods that focused on verifying the efficacy of techniques through full-contact engagement with other practitioners.

Aliveness tends to create an environment free from epistemic viciousness-- if your technique doesn't work, you'll know because you won't be able to use it against an opponent. Further, if your technique does work, you'll know that it works because you will have applied it against people trying to prevent you from doing so, and the added confidence will help you better apply that technique when you need it.

Evidence from martial arts competitions indicates that those who practice with aliveness are more effective than others. One of the chief reasons that Brazilian jiu-jitsu (BJJ) practitioners were so successful in early mixed martial arts tournaments was that BJJ-- a martial art that relies primarily on grappling and the use of submission holds and locks to defeat the opponent-- can be trained safely with almost complete aliveness, whereas many other martial arts cannot.[2]

Now, this is not to say that one should only attempt to practice martial arts under completely realistic conditions. For instance, no martial arts school that I am aware of randomly ambushes or attempts to mug its students on the streets outside of class in order to test how they would respond under truly realistic conditions.[3]

Even in the age of sword duels, people would train with blunt weapons and protective armor rather than sharp weapons and ordinary clothes. Would training with sharp weapons and ordinary clothes be more alive than training with blunt weapons and protective armor? Certainly, but the trainees wouldn't be! And yet training with blunt weapons is still useful-- the fact that training does not fully approximate realistic conditions does not intrinsically mean it is bad.

That being said, generally speaking, martial arts training that is more alive-- that better approximates realistic fighting conditions-- is more effective within reasonable safety margins. There is a growing consensus among students of martial arts who are looking for effective self-defense techniques that the specific martial art one practices is not hugely relevant, and that what matters more is the extent to which the training does or doesn't use aliveness.

 

Aliveness and Rationality

So, that's all well and good-- but how can we apply these principles to rationality practice?

While martial arts training has very clear methods of measuring whether or not skills work (can I apply this technique against a resisting opponent?), rationality training is much murkier-- measuring rationality skills is a nontrivial problem.

Further, under normal circumstances the opponent that you are resisting when applying rationality techniques is your own brain, not an external enemy.[4] This makes applying appropriate levels of resistance in training difficult, because it's very easy to cheat yourself. The best method that I have found thus far is lucid dreaming, as forcing your dreaming brain to recognize its true state through the various hallucinations and constructed memories associated with dreaming is no easy task.

That being said, I make no claims to special or unique knowledge in this area. If anyone has suggestions for useful methods of "live" rationality practice, I'd love to hear them.

 

 

[1] For further explanation, see Matt Thornton's classic video "Why Aliveness?"

[2] If your plan is to choke someone until they fall unconscious, it is possible to safely train for this with nearly complete aliveness by wrestling against an opponent and simply releasing the chokehold before they actually fall unconscious. By contrast, it is much harder to safely train to punch someone into unconsciousness, and harder still to safely train to break people's necks.

[3] The game of Assassins does do this, but usually follows rules that are constrained enough to make it a suboptimal method of training.

[4] There are some contexts in which rationality techniques are applied in order to overcome an external enemy. Competitive games and some sports are a good method of finding practice in this respect. For instance, in order to be a competitive Magic: The Gathering player, you need to engage many epistemic and instrumental rationality skills. Competitive poker can offer similar development.

Better Rationality Through Lucid Dreaming

10 katydee 18 October 2013 08:48PM

In the spirit of radioing back to describe a path:

The truly absurd thing about dreams lies not with their content, but with the fact that we believe them. Perfectly outrageous and impossible things can occur in dreams without the slightest hesitance to accept them on the part of the dreamer. I have often dreamed myself into bizarre situations that come complete with constructed memories explaining how they secretly make sense!

However, sometimes we break free from these illusions and become aware of the fact that we are dreaming. This is known as lucid dreaming and can be an extremely pleasant experience. Unfortunately, relatively few people experience lucid dreams "naturally"; fortunately, lucid dreaming is also a skill, and like any other skill it can be trained.

While this is all very interesting, you may be wondering what it has to do with rationality. Simply put, I have found lucid dreaming perhaps the best training currently available when it comes to increasing general rationality skills. It is one thing to notice when you are confused by ordinary misunderstandings or tricks; it is another to notice while your own brain is actively constructing memories and environments to fool you!

I've been involved in lucid dreaming for about eight years now and teaching lucid dreaming for two, so I'm pretty familiar with it on a non-surface level. I've also been explicitly looking into the prospect of using lucid dreaming for rationality training purposes since 2010, and I'm fairly confident that it will prove useful for at least some people here.

If you can get yourself to the point where you can consistently induce lucid dreaming by noticing the inconsistencies and absurdities of your dream state,[1] I predict that you will become a much stronger rationalist in the process. If my prediction is correct, lucid dreaming allows you to hone rationality skills while also having fun, and best of all permits you to do this in your sleep!

If this sounds appealing to you, perhaps the most concise and efficient resource for learning lucid dreaming is the book Lucid Dreaming, by Dr. Stephen LaBerge. However, this is a book and costs money. If you're not into that, a somewhat less efficient but much more comprehensive view of lucid dreaming can be found on the website dreamviews.com. I further recommend that anyone interested in this check out the Facebook group Rational Dreamers. Recently founded by LW user BrienneStrohl, this group provides an opportunity to discuss lucid dreaming and related matters in an environment free from some of the mysticism and confusion that otherwise surrounds this issue.

All in all, it seems that lucid dreaming may offer a method of training your rationality in a way that is fun,[2] interesting, and takes essentially none of your waking hours. Thus, if you are interested in increasing your general rationality, I strongly recommend investigating lucid dreaming. To be frank, my main concern about lucid dreaming as a rationality practice is simply that it seems too good to be true.

 

[1] Note that this is only one of many ways of inducing lucid dreaming. However, most other techniques that I have tried are not necessarily useful forms of rationality practice, effective as they might be.

[2] And, to be honest, "fun" is an understatement.

October Monthly Bragging Thread

10 linkhyrule5 04 October 2013 07:06AM

Since it had a decent amount of traffic until a good two weeks into September (and I thought it was a good idea), I'm reviving this thread.

Joshua_Blaine:

In an attempt to encourage more people to actually do awesome things (a la instrumental rationality), I am proposing a new monthly thread (can be changed to bi-weekly, should that be demanded). Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.

Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesomest thing they've done all month. Not will do. Not are working on. Have already done. This is to cultivate an environment of object level productivity rather than meta-productivity methods.

So, what's the coolest thing you've done this month?

Reflective Control

11 lionhearted 02 September 2013 05:45PM

You've had those moments -- the ones where you're very aware of where you're at in the world, and you're mapping out your future and plans very smartly, and you're feeling great about taking action and pushing important things forwards.

I used to find myself only reaching that place, at random, once or twice per year.

But every time I did, I would spend just a few hours sketching out plans, thinking about my priorities, discarding old things I used to do that didn't bring much value, and pushing my limits to do new worthwhile things. I thought, "This is really valuable. I should do this more often."

Eventually, I named that state: Reflective Control.

As often happens, naming the state made it easier to get there more often.

At this point, I still had only a hazy, poorly-functioning sense of what it was. So I tried to define it. After many attempts, I came to this:

> Reflective Control is when you're firmly off autopilot, in a high-positive and high-willpower state, and are able to take action.

You'll note there are four discrete components to it: firmly off autopilot (reflective), high positivity, high will, and capable of and oriented towards taking action.

I also asked myself, "How to know if you're in Reflective Control?"

My best answer is an exercise:

> You set aside the impulses/distractions, and try to set a concrete Control-related goal. This is meta-work, meaning the process of defining your life and what needs to happen next. You do this calmly. By setting a concrete Control-related goal successfully and then executing on it, you know you're in an RC state.

> Example: "I will identify all the open projects I've got, and the next steps for each of them."

 

With that definition and that exercise in hand, I was able to do something which works almost magically when I want to take on big challenges: I rate myself from 1-100 on each of the four key components, then set a concrete goal to achieve, and analyze a little about which factor might be holding me back. Here is an example from my journal:

> Reflective 70/100, positive 70/100, will 65/100, action 40/100… ok, I'm feeling good overall, just some anxiety suppressing will a little and action quite a bit, but no problem. My goal is to finish the xxx outline before I leave here.

I've found this incredibly useful. Summary:

* There's a state I call "Reflective Control" where I'm off autopilot and thinking (reflective), in a positive mood, with willpower, and action-oriented.

* I can put explicit numbers on this, somewhat subjectively, from 1-100. This lets me see where the weak link in the chain is, if any.

* By setting a concrete goal and working towards it, you can get more objective feedback and shore up whichever element is lowest with some practical actions (one way to track this is sketched below).
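A minimal sketch of one way to keep check-ins like the journal entry above as structured data; the names here (`RCCheckIn`, `weakest_link`) are illustrative inventions, not the author's:

```python
from dataclasses import dataclass

@dataclass
class RCCheckIn:
    """One journal check-in on the four Reflective Control components (1-100)."""
    reflective: int  # firmly off autopilot?
    positive: int    # mood
    will: int        # willpower
    action: int      # orientation toward taking action
    goal: str        # the concrete, Control-related goal for this session

    def weakest_link(self) -> str:
        """Name the component most likely to be holding you back."""
        scores = {"reflective": self.reflective, "positive": self.positive,
                  "will": self.will, "action": self.action}
        return min(scores, key=scores.get)

# The journal entry above, as data:
entry = RCCheckIn(70, 70, 65, 40, "finish the xxx outline before I leave here")
print(entry.weakest_link())  # "action" -- the factor to shore up first
```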

Optimize Your Settings

14 katydee 29 July 2013 09:10PM

Related to: The Good News of Situationist Psychology

Perhaps the most significant teaching social psychology has to offer is that most of our behaviors are determined by situational factors inherent to our settings, not by our personal qualities.[1]

Some consider this depressing-- for instance, the Milgram obedience experiments and the Stanford prison experiment are often cited as examples of how settings can cause otherwise-good people to participate in and even support unethical and dangerous behavior. However, as lukeprog points out in The Good News of Situationist Psychology, this principle can also be considered uplifting. After all, if our settings have such an effect on our behavior, they are thus a powerful tool that we can employ to make ourselves more effective.[2]

 

Changing Your Physical Settings

One relatively easy place to start making such changes is in your personal life. I have found that great productivity increases can be gained through relatively minor changes in lifestyle-- or even seemingly-trivial matters such as the position of physical (or sometimes digital) objects in your environment!

For instance, I recently noticed a tendency in myself to "wake up" and then waste the next twenty or thirty minutes aimlessly browsing the Internet on my laptop in bed before actually getting up and eating breakfast, showering, going to work, etc. Since I value time, especially morning time, substantially, I decided that action should be taken to avoid this.

At first, I figured that once I had noticed the problem I could simply apply willpower and avoid it, but this proved less than effective-- it turns out that my willpower is not at its strongest when I first wake up and am still a little groggy![3] I then decided to apply the principles of situational psychology to the situation. The most obvious setting contributing to the problem was that I was using an alarm app on my computer to wake up in the morning, and turning off this alarm caused me to interact with the computer.

So I picked up an IKEA alarm clock, turned off my alarm app, and moved my computer to the kitchen instead of my room-- problem solved. In my new settings, browsing in bed was outright ridiculous-- I'd have to wake up, go downstairs to the kitchen, pick up my computer, and bring it back up to my room with me. Not a likely course of events!

 

Changing Your Mental Settings

While physical environments can certainly produce changes in behavior,[4] social and intellectual environments can too.

For instance, one of my friends from undergrad took an interesting approach when choosing what major to take. He knew that he wanted a solid private-sector income that would allow him to support a family, but didn't particularly care what field it was in. Overall, he wanted to ensure that whatever major he chose would have the highest possible chance of getting him a good job without unusual effort or circumstances.

Therefore, during winter term of his sophomore year, prior to declaring, he went around to all the seniors he could get to talk to him and asked them what their major was, what they were doing post-graduation, and how much money they anticipated making. He found that the CS majors tended to have more private-sector job prospects and higher average starting salaries than students in other fields, so he decided to declare a CS major.[5]

While I don't think my friend's approach is necessarily the best possible option for determining what to do with your life, it certainly beats the sort of unstructured guessing that I've seen many others do. By considering academic majors as settings and examining what setting produced the best result on average, my friend managed to find a field and career that he's by all indications quite happy in-- and with a minimal amount of risk and stress involved.

 

Conclusion

Human psychology is greatly influenced by situational factors, and in more ways than a naive reasoner might expect. If you're looking to improve your life across any particular axis, one good way to start is by examining your current physical, social, and intellectual settings and paying close attention to how changes in those settings might help accomplish your goals.

 

[1] If you don't believe that this is true, I advise simulating that you do and going on anyway. I find this method effective enough for me and others and easy enough to implement that it seems well worth testing, even if you don't fully believe in the claims behind it. At worst, it might become a potential epistemic/instrumental tradeoff.

[2] See for instance Joseph Heath and Joel Anderson, Procrastination and the Extended Will (2009).

[3] In the course of researching and writing this post, I encountered some objections to the resource expenditure theory of willpower (many of which have already been summarized here by Jess_Riedel). I believe my beliefs regarding willpower loss while tired/just awakening may be limiting in the same sense that believing willpower is a limited resource appears limiting, but have yet to test at the time of this writing.

[4] If you're interested in seeing other examples of ways in which we can structure the physical objects around us in order to become more productive, you may wish to check out Alicorn's How to Have Things Correctly and fowlertm's related How to Have Space Correctly. Several of Alyssa Vance's Random Life Tips also relate to this matter.

[5] The friend in question is now employed as a software engineer at a tech company and by all indications loves his job. Note though that this post isn't saying "you should be a CS major." Things change over time, and what was a good choice for one person and one time may not be a good choice for another person or another time.

The Centre for Applied Rationality: a year later from a (somewhat) outside perspective

40 Swimmer963 27 May 2013 06:31PM

I recently had the privilege of being a CFAR alumna volunteering at a later workshop, which is a fascinating thing to do, and put me in a position both to evaluate how much of a difference the first workshop actually made in my life, and to see how the workshops themselves have evolved. 

Exactly a year ago, I attended one of the first workshops, back when they were still inexplicably called “minicamps”. I wasn't sure what to expect, and I especially wasn't sure why I had been accepted. But I bravely bullied the nursing faculty staff until they reluctantly let me switch a day of clinical around, and later stumbled off my plane into the San Francisco airport in a haze of exhaustion. The workshop spat me out three days later, twice as exhausted, with teetering piles of ideas and very little time or energy to apply them. I left with a list of annual goals, which I had never bothered to have before, and a feeling that more was possible–this included the feeling that more would have been possible if the workshop had been longer and less chaotic, if I had slept more the week before, if I hadn't had to rush out on Sunday evening to catch a plane and miss the social. 

Like I frequently do on Less Wrong the website, I left the minicamp feeling a bit like an outsider, but also a bit like I had come home. As well as my written goals, I made an unwritten pre-commitment to come back to San Francisco later, for longer, and see whether I could make the "more is possible" in my head more specific. Of my thirteen written goals on my list, I fully accomplished only four and partially accomplished five, but I did make it back to San Francisco, at the opportunity cost of four weeks of sacrificed hospital shifts. 

A week or so into my stay, while I shifted around between different rationalist shared houses and attempted to max out interesting-conversations-per-day, I found out that CFAR was holding another May workshop. I offered to volunteer, proved my sincerity by spending 6 hours printing and sticking nametags, and lived on site for another 4-day weekend of delightful information overload and limited sleep. 

Before the May 2012 workshop, I had a low prior that any four-day workshop could be life-changing in a major way. A four-year nursing degree, okay–I've successfully retrained my social skills and my ability to react under pressure by putting myself in particular situations over and over and over and over again. Four days? Nah. Brains don't work that way. 

In my experience, it's exceedingly hard for the human brain to do anything deliberately. In Kahneman-speak, habits are System 1, effortless and automatic. Doing things on purpose involves System 2, effortful and a bit aversive. I could have had a much better experience in my final intensive care clinical if I'd thought to open up my workshop notes and tried to address the causes of aversions, or use offline time to train habits, or, y'know, do anything on purpose instead of floundering around trying things at random until they worked. 

(Then again, I didn't apply concepts like System 1 and System 2 to myself a year ago. I read 'Thinking Fast and Slow' by Kahneman and 'Rationality and the Reflective Mind' by Stanovich as part of my minicamp goal 'read 12 hard nonfiction books this year', most of which came from the CFAR recommended reading list. If my preceptor had had any idea what I was saying when I explained to her that she was running particular nursing skills on System 1, because they were ingrained on the level of habit, while I was running the same tasks on System 2 in working memory because they were new and confusing to me, and that this was why I appeared to have poor time management, because System 2 takes forever to do anything, this terminology might have helped. Oh, for the world where everyone knows all jargon!)

...And here I am, setting aside a month of my life to think only about rationality. I can't imagine that my counterfactual self-who-didn't-attend-in-May-2012 would be here. I can't imagine that being here now will have zero effect on what I'm doing in a year, or ten years. Bingo. I did one thing deliberately!

So what was the May 2013 workshop actually like?

The curriculum has shifted around a lot in the past year, and I think with 95% probability that it's now more concretely useful. (Speaking of probabilities, the prediction markets during the workshop seemed to flow better and be more fun and interesting this time, although this may just show that I used to be more averse to games in general and betting in particular. In that case, yay for partly-cured aversions!)

The classes are grouped in an order that allows them to build on each other usefully, and they've been honed by practice into forms that successfully teach skills, instead of just putting words in the air and on flipcharts. For example, having a personal productivity system like GTD came across as a culturally prestigious thing at the last workshop, but there wasn't a lot of useful curriculum on it. Of course, I left on this trip wanting to spend my offline month creating a GTD system better than paper to-do lists taped to walls, so I have both motivation and a low threshold for improvement. 

There are also some completely new classes, including "Againstness training" by Valentine, which seems to relate to some of the 'reacting under pressure' stuff in interesting ways, and gave me vocabulary and techniques for something I've been doing inefficiently by trial and error for a good part of my life.

In general, there are more classes about emotions, both how to deal with them when they're in the way and how to use them when they're the best tool available. Given that none of us are Spock, I think this is useful. 

Rejection therapy has morphed into a less terrifying and more helpful form with the awesome name of CoZE (Comfort Zone Expansion). I didn't personally find the original rejection therapy all that awful, but some people did, and that problem is largely solved. 

The workshops are vastly more orderly and organized. (I like to think I contributed to this slightly with my volunteer skills of keeping the fridge stocked with water bottles and calling restaurants to confirm orders and make sure food arrived on time.) Classes began and ended on time. The venue stayed tidy. The food was excellent. It was easier to get enough sleep. Etc. The May 2012 venue had a pool, and this one didn't, which made exercise harder for addicts like me. CFAR staff are talking about solving this. 

The workshops still aren't an easy environment for introverts. The negative parts of my experience in May 2012 were mostly because of this. It was easier this time, because as a volunteer I could skip classes if I started to feel socially overloaded, but periods of quiet alone time had to be effortfully carved out of the day, and at an opportunity cost of missing interesting conversations. I'm not sure if this problem is solvable without either making the workshops longer, in order to space the material out, and thus less accessible for people with jobs, or by cutting out curriculum. Either would impose a cost on the extroverts who don't want an hour at lunch to meditate or go running alone or read a sci-fi book, etc. 

In general, I found the May 2012 workshop too short and intense–we had material thrown at us at a rate far exceeding the usual human idea-digestion rate. Keeping in touch via Skype chats with other participants helped. CFAR now does official followups with participants for six weeks following the workshop. 

Meeting the other participants was, as usual, the best part of the weekend. The group was quite diverse, although I was still the only health care professional there. (Whyyy???? The health care system needs more rationality so badly!) The conversations were engaging. Many of the participants seem eager to stay in touch. The May 2012 workshop has a total of six people still on the Skype chats list, which is a 75% attrition rate. CFAR is now working on strategies to help people who want to stay in touch do it successfully. 

Conclusions?

I thought the May 2012 workshop was awesome. I thought the May 2013 workshop was about an order of magnitude more awesome. I would say that now is a great time to attend a CFAR workshop...except that the organization is financially stable and likely to still be around in a year and producing even better workshops. So I'm not sure. Then again, rationality skills have compound interest–the value of learning some new skills now, even if they amount more to vocab words and mental labels than superpowers, compounds over the year that you spend seeing all the books you read and all the opportunities you have in that framework. I'm glad I went a year ago instead of this May. I'm even more glad I had the opportunity to see the new classes and meet the new participants a year later. 


Epistemic and Instrumental Tradeoffs

20 katydee 19 May 2013 07:49AM

Related: What Do We Mean By "Rationality?"

Epistemic rationality and instrumental rationality are both useful. However, some things may benefit one form of rationality yet detract from another. These tradeoffs are often not obvious, but can have serious consequences.

For instance, take the example of learning debate skills. While involved in debate in high school, I learned how to argue a position quite convincingly, muster strong supporting evidence, prepare rebuttals for counterarguments, prepare deflections for counterarguments that are difficult to rebut, and so on.

I also learned how to do so regardless of what side of a topic I was assigned to.

My debate experience has made me a more convincing and more charismatic person, improved my public speaking skills, and bolstered my ability to win arguments. Instrumentally speaking, this can be a very useful skillset. Epistemically speaking, this sort of preparation is very dangerous, and I later had to unlearn many of these thought patterns in order to become better at finding the truth.

For example, when writing research papers, the type of motivated cognition used when searching for evidence to bolster a position in a debate is often counterproductive. Similarly, when discussing what the best move for my business to make is, the ability to argue convincingly for a position regardless of whether it is right is outright dangerous, and lessons learned from debate may actually decrease the odds of making the correct decision-- if I'm wrong but convincing and my colleagues are right but unconvincing, we could very well end up going down the wrong path!

Epistemic and instrumental goals may also conflict in other ways. For instance, Kelly (2003)[1] points out that, from an epistemic rationality perspective, learning movie spoilers is desirable, since they will improve your model of the world. Nevertheless, many people consider spoilers to be instrumentally negative, since they prefer the tension of not knowing what will happen while they watch a movie.

Bostrom (2011)[2] describes many more situations where having a more accurate model of the world can be hazardous to various instrumental objectives. For instance, knowing where the best parties are held on campus can be a very useful piece of knowledge to have in many contexts, but can become a distracting temptation when you're writing your thesis. Knowing that one of your best friends has just died can be very relevant to your model of the world, but can also cause you to become dangerously depressed. Knowing that Stalin's wife didn't die from appendicitis can be useful for understanding certain motivations, but can be extraordinarily dangerous to know if the secret police come calling.

Thus, epistemic and instrumental rationality can in some cases come into conflict. Some instrumental skillsets might be better off neglected for reasons of epistemic hygiene; similarly, some epistemic ventures might yield information that it would be instrumentally better not to know. When developing rationality practices and honing one's skills, we should take care to acknowledge these tradeoffs and plan accordingly.

 

[1] Kelly, T., (2003). Epistemic Rationality as Instrumental Rationality: A Critique. Philosophy and Phenomenological Research, 66(3), pp. 612-640.

[2] Bostrom, N., (2011). Information Hazards: A Typology of Harms from Knowledge. Review of Contemporary Philosophy, 10, pp. 44-79.
