
My experience of the recent CFAR workshop

16 Kaj_Sotala 27 November 2014 04:17PM

Originally posted at my blog.

---

I just got home from a four-day rationality workshop in England, organized by the Center for Applied Rationality (CFAR). It covered a lot of content, but if I had to choose a single theme that united most of it, it would be listening to your emotions.

That might sound like a weird focus for a rationality workshop, but cognitive science has shown that the intuitive and emotional part of the mind (“System 1”) is both in charge of most of our behavior and also carries out a great deal of valuable information-processing of its own (it’s great at pattern-matching, for example). Much of the workshop material was aimed at helping people reach a greater harmony between their System 1 and their verbal, logical System 2. Many of people’s motivational troubles come from the goals of their two systems being somehow at odds with each other, so we were taught ways of getting the two systems into a better dialogue with each other, harmonizing their desires and making it easier for information to cross from one system to the other and back.

To give a more concrete example, there was the technique of goal factoring. You take a behavior that you often do but aren’t sure why, or which you feel might be wasted time. Suppose that you spend a lot of time answering e-mails that aren’t actually very important. You start by asking yourself: what’s good about this activity, that makes me do it? Then you try to listen to your feelings in response to that question, and write down what you perceive. Maybe you conclude that it makes you feel productive, and it gives you a break from tasks that require more energy to do.

Next you look at the things that you came up with, and consider whether there’s a better way to accomplish them. There are two possible outcomes here. Either you conclude that the behavior is an important and valuable one after all, meaning that you can now feel more motivated to do it. Alternatively, you find that there would be better ways of accomplishing all the goals that the behavior was aiming for. Maybe taking a walk would make for a better break, and answering more urgent e-mails would provide more value. If you were previously spending two hours per day on the unimportant e-mails, you might now achieve more in terms of both relaxation and actual productivity by spending an hour on a walk and an hour on the important e-mails.

At this point, you consider your new plan and again ask yourself: does this feel right? Is this motivating? Are there any slight pangs of regret about giving up my old behavior? If you still don’t want to shift your behavior, chances are that the behavior serves some motive that you have missed, and the feelings of productivity and relaxation aren’t quite enough to cover it. In that case, go back to the step of listing motives.

Or, if you feel happy and content about the new direction that you’ve chosen, victory!

Notice how this technique is all about moving information from one system to another. System 2 notices that you’re doing something but it isn’t sure why that is, so it asks System 1 for the reasons. System 1 answers, “here’s what I’m trying to do for us, what do you think?” Then System 2 does what it’s best at, taking an analytic approach and possibly coming up with better ways of achieving the different motives. Then it gives that alternative approach back to System 1 and asks, would this work? Would this give us everything that we want? If System 1 says no, System 2 gets back to work, and the dialogue continues until both are happy.
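Since the technique is essentially a loop, here’s a toy sketch of its structure in Python (my own illustration, not anything from the workshop materials; all the actual work is left to the human answering the prompts):

    def goal_factor(behavior: str) -> None:
        """Toy goal-factoring aid: the 'System 1' checks are just prompts."""
        print(f"Behavior under examination: {behavior}")
        motives = []
        while True:
            motive = input("What's good about this activity? (empty line when done) ")
            if not motive:
                break
            motives.append(motive)
        while True:
            plan = input(f"Propose a plan that achieves all of: {motives}\n> ")
            if input("Does this plan feel right and motivating? (y/n) ").lower().startswith("y"):
                print(f"Victory! New plan: {plan}")
                return
            # A pang of regret means the plan misses some motive; add it and retry.
            motives.append(input("What motive did the plan miss? "))

    goal_factor("answering unimportant e-mails")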

Again, I emphasize the collaborative aspect between the two systems. They’re allies working for common goals, not enemies. Too many people tend towards one of two extremes: either thinking that their emotions are stupid and something to suppress, or completely disdaining the use of logical analysis. Both extremes miss out on the strengths of the system that is neglected, and make it unlikely for the person to get everything that they want.

As I was heading back from the workshop, I considered doing something that I noticed feeling uncomfortable about. Previous meditation experience had already made me more likely to just attend to the discomfort rather than try to push it away, but inspired by the workshop, I went a bit further. I took the discomfort, considered what my System 1 might be trying to warn me about, and concluded that it might be better to err on the side of caution this time around. Finally – and this wasn’t a thing from the workshop, it was something I invented on the spot – I summoned a feeling of gratitude and thanked my System 1 for having been alert and giving me the information. That might have been a little overblown, since neither system should actually be sentient by itself, but it still felt like a good mindset to cultivate.

Although it was never mentioned in the workshop, what comes to mind is the concept of wu-wei from Chinese philosophy, a state of “effortless doing” where all of your desires are perfectly aligned and everything comes naturally. In the ideal form, you never need to force yourself to do something you don’t want to do, or to expend willpower on an unpleasant task. Either you want to do something and do it, or you don’t want to do it and don’t.

A large number of the workshop’s classes – goal factoring, aversion factoring and calibration, urge propagation, comfort zone expansion, inner simulation, making hard decisions, Hamming questions, againstness – were aimed at more or less this. Find out what System 1 wants, find out what System 2 wants, dialogue, aim for a harmonious state between the two. Then there were a smaller number of other classes that might be summarized as being about problem-solving in general.

The classes about the different techniques were interspersed with “debugging sessions” of various kinds. At the beginning of the workshop, we listed different bugs in our lives – anything about our lives that we weren’t happy with. The suggested example bugs were things like “every time I talk to so-and-so I end up in an argument”, “I think that I ‘should’ do something but don’t really want to”, and “I’m working on my dissertation and everything is going fine – but when people ask me why I’m doing a PhD, I have a hard time remembering why I wanted to”. After we’d had a class or a few, we’d apply the techniques we’d learned to solving those bugs, either individually, in pairs, or in small groups, with a staff member or volunteer TA assisting us. Then came a few more classes on techniques and more debugging, classes and debugging, and so on.

The debugging sessions were interesting. Often when you ask someone for help on something, they will answer with direct object-level suggestions – if your problem is that you’re underweight and you would like to gain some weight, try this or that. Here, the staff and TAs would eventually get to the object-level advice as well, but first they would ask: why don’t you want to be underweight? Okay, you say that you’re not completely sure, but based on the other things that you said, here’s a stupid and quite certainly wrong theory of what your underlying reasons might be – how does that theory feel? Okay, you said that it’s mostly on the right track, so now tell me what’s wrong with it. If you feel that gaining weight would make you more attractive, do you feel that this is the most effective way of achieving that?

Only after you and the facilitator had reached some kind of consensus on why you thought that something was a bug, and made sure that solving the problem you were discussing was actually the best way to address those reasons, would it be time for the more direct advice.

At first, I had felt that I didn’t have very many bugs to address, and that I had mostly gotten reasonable advice that I might try on them. But then the workshop continued, and there were more debugging sessions, and I had to keep coming up with bugs. And then, under the gentle poking of others, I started finding the underlying, deep-seated problems, and some things that had been motivating my actions for the last several months without me always fully realizing it. At the end, when I looked at the list of bugs that I’d come up with at the beginning, most of the first items looked hopelessly shallow compared to the later ones.

Often in life you feel that your problems are silly, and that you are affected by small stupid things that “shouldn’t” be a problem. There was none of that at the workshop: it was tacitly acknowledged that being unreasonably hindered by “stupid” problems is just something that brains tend to do. Valentine, one of the staff members, gave a powerful speech about “alienated birthrights” – things that all human beings should be capable of engaging in and enjoying, but which have been taken from people because they have internalized beliefs and identities that say things like “I cannot do that” or “I am bad at that”. Things like singing, dancing, athletics, mathematics, romantic relationships, actually understanding the world, heroism, tackling challenging problems. To use his analogy, we might not be good at these things at first, and may have to grow into them and master them the way that a toddler grows to master her body. And like a toddler who’s taking her early steps, we may flail around and look silly when we first start doing them, but these are capacities that – barring any actual disabilities – are a part of our birthright as human beings, which anyone can ultimately learn to master.

Then there were the people, and the general atmosphere of the workshop. People were intelligent, open, and motivated to work on their problems, help each other, and grow as human beings. After a long, cognitively and emotionally exhausting day at the workshop, people would then shift to entertainment ranging from wrestling to telling funny stories of their lives to Magic: the Gathering. (The game of “bunny” was an actual scheduled event on the official agenda.) And just plain talk with each other, in a supportive, non-judgemental atmosphere. It was the people and the atmosphere that made me the most reluctant to leave, and I miss them already.

Would I recommend CFAR’s workshops to others? Although my above description may sound rather gushingly positive, my answer still needs to be a qualified “mmmaybe”. The full price tag is quite hefty, though financial aid is available; I personally got a very substantial scholarship, with the agreement that I would pay at a later time, when I could actually afford it.

Still, the biggest question is: will the changes from the workshop stick? I feel like I have gained a valuable new perspective on emotions and a number of useful techniques, made new friends, strengthened my belief that I can do the things that I really set my mind on, and refined the ways in which I think about the world and any problems that I might have – but aside from the new friends, all of that will be worthless if it fades away in a week. If it does, I would have to judge even my steeply discounted price as “not worth it”. That said, the workshops do have a money-back guarantee if you’re unhappy with the results, so if it really feels like it wasn’t worth it, I can simply choose not to pay. And if all the new things do end up sticking, it might turn out that it would have been worth paying even the full, non-discounted price.

CFAR does have a few ways by which they try to make the changes stick. There will be Skype follow-ups with their staff to talk about how things have been going since the workshop. There is a mailing list for workshop alumni, and occasional events, though the physical events are very US-centric (and in particular, San Francisco Bay Area-centric).

The techniques that we were taught are still all more or less experimental, and are being constantly refined and revised according to people’s experiences. I have already been thinking about a new skill that I had been playing around with for a while before the workshop, and which has a bit of that “CFAR feel” – I will aim to have it written up soon and sent to the others, and maybe it will eventually make its way into the curriculum of a future workshop. That should help keep me engaged as well.

We shall see. Until then, as they say at CFAR – to victory!

Comment author: SilentCal 21 November 2014 09:15:21PM 1 point

Looks brilliant!

Just read through your blog posts (indexed reasonably well on your mailing list at https://groups.google.com/forum/?fromgroups#!forum/the-fundamental-question), and I recommend the same to others, especially http://kajsotala.fi/2013/12/bayesian-academy-game-constraints/

EDIT: I've been able to answer most of my questions/concerns myself from reading those posts. Are they still accurate as to the end goal of the project?

Comment author: Kaj_Sotala 27 November 2014 04:12:53PM 0 points

Are they still accurate as to the end goal of the project?

Very generally speaking, yes, but some of the more ambitious stuff (e.g. the elaborate social relationship mechanics) is likely to be left out of the version that I'm currently working on, since its direct educational value would be hard to justify to my supervisor. It might make it into a later version, though.

Comment author: iarwain1 20 November 2014 04:41:08PM 3 points

I wonder if you could make this project open source and let other LW members contribute. This looks like a fun and possibly useful project that many in the community would be willing to contribute to.

Comment author: Kaj_Sotala 20 November 2014 06:08:21PM 5 points

https://github.com/ksotala/BayesGame

It's a mess, but feel free to poke around and laugh, and then improve it once you've finished laughing. Or to make your own competing game. :-)

Comment author: eli_sennesh 18 November 2014 11:40:22AM 1 point

I could hug you. I owe you a drink. This is precisely the direction I was thinking FAI research should be heading in!

Your preprint is inaccessible and I'm on the other side of the planet, so I can't actually do any of the things listed above, but they are firmly on my TODO list.

Comment author: Kaj_Sotala 20 November 2014 03:22:58PM 0 points

Thanks! I'll take you up on the drink offer if I ever end up on your side of the planet. :)

If you can't access the academia.edu copy, does this link work?

Comment author: spatiality 20 November 2014 08:21:11AM 2 points

That's awesome that you're working on this!! I would really love to play it, with little regard to the outcome (I'd put it somewhere between quite interesting and totally super helpful). I wondered, though, whether you could make the player tally the probabilities himself and keep the scorings hidden, yet allow taking notes ("hard mode"? and "hell", with no prior info on time limits? haha, I can totally see being hit by the planning fallacy when trying to figure that one out for the first few times). I also imagine that leaving the graphics out for now is the sane thing to do, even if you might want to think about it again once the rest is finished, for the sake of spreading the art.

Comment author: Kaj_Sotala 20 November 2014 03:13:53PM 2 points

I did think about something like the "hard mode" but left it out as infeasible right now. Maybe at some point. :)

Comment author: V_V 20 November 2014 01:35:59PM 7 points

Your model of the world has been updated! The prior of the variable 'Monster Near The Academy' is now 0%.

Priors don't get updated, posteriors do. Moreover, if the posterior probability becomes 0, then you will be unable to recognize monsters afterwards, and you will not be able to further update your model for this variable. It looks like you are overupdating.
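To make the second point concrete, here's a minimal sketch of the update rule (illustrative numbers, not the game's actual code):

    def update(prior, p_evidence_if_true, p_evidence_if_false):
        # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
        numerator = p_evidence_if_true * prior
        p_evidence = numerator + p_evidence_if_false * (1 - prior)
        return numerator / p_evidence

    p = 0.5
    p = update(p, 0.01, 0.90)  # strong evidence against: p drops to ~0.011
    p = update(p, 0.90, 0.01)  # strong evidence for: p recovers to ~0.50

    p = 0.0                    # but set the probability to exactly zero...
    p = update(p, 0.99, 0.01)  # ...and no evidence can ever move it: p stays 0.0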

Comment author: Kaj_Sotala 20 November 2014 03:12:44PM 3 points

Thanks for the comments!

Priors don't get updated, posteriors do.

That's technically true, though it felt to me like such a common abuse of terminology that it could be allowed to slide. That said, if I just said "the probability of the variable", that would avoid the problem. (That probability may still be listed as a "prior variable" the next time it's used in a calculation... but then it's a prior for that calculation, so that's probably okay.)

Moreover, if the posterior probability becomes 0, then you will be unable to recognize monsters afterwards, and you will not be able to further update your model for this variable.

That's true, too. I was thinking that the belief networks aren't supposed to literally represent the protagonist's complete set of beliefs about the world, just some set of explicitly-held hypotheses, and she's still capable of realizing that something to which she assigned a 0% probability actually happened. After all, the boy could have been looking in her direction because of something that was neither her response nor a monster, say a beautiful bird... which wasn't even assigned a 0% probability; it wasn't represented in the model in the first place. But it's not like she'd have been incapable of realizing that possibility, had it been pointed out to her - she just didn't think of it.

Comment author: Articulator 20 November 2014 12:16:38AM 3 points

This looks really interesting - do you have a timeframe on a playable demo, Kaj?

I sympathize with you on the Java - easier than most other methods, but oh god the lack of style. I think even just making those choice buttons a little less default (non-serif font, lose the blue shading) could move it a fair way toward being presentable.

My primary concern currently is that even if you have a robust engine to abstract away much of the coding, this looks like it would have a very poor input-to-output time ratio. Do you have any plans for circumventing that, or do you have enough time to brute-force it?

Comment author: Kaj_Sotala 20 November 2014 10:39:13AM 2 points

Hopefully within a few months: since this is for my thesis, I have the chance to work mostly full-time on this until next summer, though some of that time also needs to be spent on collecting data on test subjects and finding out whether they actually learn things from playing the game.

Comment author: John_Maxwell_IV 20 November 2014 02:41:06AM 5 points

Nice work!

Did you consider using one of those fancy new JavaScript game frameworks so your game can trivially be distributed through the internet and played on all platforms? (An acquaintance who runs a game site reports that web-based games on his site get more plays than downloadable ones.)

I found this on Google, not sure if the code will be useful: http://pl4n3.blogspot.com/2013/07/bayesjs-bayesian-networks-javascript.html

Comment author: Kaj_Sotala 20 November 2014 10:37:12AM 1 point

At a later stage, possibly. Right now I'm just focused on getting a playable and fun version out in a language/framework I happen to be familiar with already; I'll think about optimizing the platform for maximal reach later on. Getting an in-browser version would be good, though.

Comment author: IlyaShpitser 20 November 2014 01:20:08AM 8 points

Neat! Two suggestions:

(a) A trial (with evidence and a verdict) is a good way to show how to update beliefs as new evidence comes to light, if there is room in the game for that. It's such a natural thing to use Bayes nets in this context that lawyers invented an early version called 'Wigmore charts'. (See the sketch after these suggestions.)

(b) It would be neat to demonstrate confounding bias somehow (e.g. a common cause cancelling out an existing relationship, or explaining it away entirely).
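For (a), a minimal sketch of what that sequential updating could look like in odds form (entirely made-up numbers, not a proposed game mechanic):

    from functools import reduce

    def posterior_odds(prior_odds, likelihood_ratios):
        # Each ratio is P(evidence | guilty) / P(evidence | innocent); updating
        # on independent pieces of evidence is just multiplication in odds form.
        return reduce(lambda odds, lr: odds * lr, likelihood_ratios, prior_odds)

    # Hypothetical trial: prior odds of guilt 1:10, then three pieces of evidence.
    odds = posterior_odds(1 / 10, [5.0, 8.0, 0.5])  # the third piece favors innocence
    probability = odds / (1 + odds)                 # odds of 2.0 -> probability ~0.67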

Comment author: Kaj_Sotala 20 November 2014 10:35:22AM 2 points

Thanks, I'll have to look up Wigmore charts!

I was intending to have something like confounding bias appear in the form of the protagonist's model of the world being gradually updated to contain larger and more detailed networks, so that e.g. two variables that appeared to have a causal relationship in an early network would turn out to have a common cause in a later, more accurate one. (The player can acquire more accurate networks either by allocating time to studying and learning from the work of others, or by experimenting themselves. Not sure of the exact mechanics for these yet.)
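As a toy simulation of that kind of model revision (made-up variables, nothing from the game): two variables can look causally linked until the common cause is made explicit.

    import random

    def sample():
        # "Umbrellas" and "wet streets" look linked, but both are driven by "rain".
        rain = random.random() < 0.3
        umbrella = random.random() < (0.9 if rain else 0.1)
        wet_streets = random.random() < (0.95 if rain else 0.05)
        return rain, umbrella, wet_streets

    data = [sample() for _ in range(100_000)]

    def p_wet(umbrella_value, keep=lambda rain: True):
        rows = [wet for rain, umb, wet in data if umb == umbrella_value and keep(rain)]
        return sum(rows) / len(rows)

    # In the coarse model, umbrellas strongly "predict" wet streets...
    print(p_wet(True), p_wet(False))    # roughly 0.76 vs 0.09
    # ...but conditioning on the common cause makes the apparent link vanish.
    print(p_wet(True, keep=lambda rain: rain),
          p_wet(False, keep=lambda rain: rain))   # both ~0.95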

Comment author: ArisKatsaris 20 November 2014 10:22:28AM 5 points

Since in the immediately following part the protagonist assigns a 50% chance to a monster being behind her back (because she doesn't know for sure either way), I'm guessing that the concept of having reasonable priors is one which is supposed to be introduced gradually. This early on, the character's estimates tend to go from a false 50% to either a false 0% or a false 100%.

So, yeah, what you suggest is probably too complex to start with.

Comment author: Kaj_Sotala 20 November 2014 10:25:37AM 3 points

You're absolutely correct.
