Motivators: Altruistic Actions for Non-Altruistic Reasons

19 Ruby 21 June 2014 04:32PM

Introduction

Jane is an effective altruist: she researches, donates, and volunteers in the highest impact ways she can find. Jane has been intending to write an effective altruism book for over a year, but hasn't managed to overcome the akrasia. Jane then meets fellow effective altruist, Jessica, who she is keen to impress. She starts writing with palpable enthusiasm.

In one possible world:

Jane feels guilty that she has an impure motive for writing the book.

In another:

Jane is glad to leverage the motivation to impress Jessica to help her do good.

 

In the past few months, I've heard multiple people mention their use of less noble motivations in order to get valuable things done. It appears to be a common experience among rationalists and EAs, myself included. The way I'm using the terms, a reason for performing some action is the ostensible goal you wish to accomplish, e.g. the goal of reducing suffering. A motivator for that action is an associated reward which makes performing the action seem enticing - “yummy” - e.g. impressing your friends. I use the less common term ‘motivator’ to distinguish the specific motivations I'm discussing from the more general meaning of ‘motivation’.

Many of our goals are multiple steps removed from the actions necessary to achieve them, particularly the broad-scale altruistic ones. The goals are large, abstract, long-term, ill-specified, difficult to see progress on, and unintuitively connected to the action required. ‘I wrote a LessWrong post, is the world more rational yet?’ In contrast, motivators are tangible, immediate, and typically tickle the brain’s reward centres right in the sweet spot. Social approval, enjoyment of the action, money, skills gained, and others all serve as imminent rewards whose immediate anticipation drives us. ‘Woohoo, 77 upvotes!’ Unsurprisingly, we find ourselves turning to these immediate rewards if we want to accomplish something.

Note that a reason - the ostensible goal - can still be the root cause of the desire to perform an altruistic action. For one thing, if I didn't truly care about all the things I say I care about doing, and really only wanted social approval, then why join this particular group of people out of all others? 

 

Embrace or Disgrace?

If motivators are the mechanism by which I'm getting things done, then I want to know exactly what’s going on, what benefits I’m getting, and what costs I'm paying. To date, I have seen people respond in two ways after recognising their motivators for altruistic action: i) by enthusiastically embracing the ability of motivators to spur good action, or ii) by feeling abject shame and guilt at not acting for the right reasons. 

The second response follows from society’s conventional attitude towards anything it deems to be an impure motive: absolute and unrestrained damnation. Few things are considered more evil than doing public good for personal gain. 

We have a visceral reaction to the idea that anyone would make very much money helping other people. Interesting that we don't have a visceral reaction to the notion that people would make a lot of money not helping other people. You know, if you want to make 50 million dollars selling violent video games to kids, go for it. We'll put you on the cover of Wired magazine. But you want to make half a million dollars trying to cure kids of malaria, and you're considered a parasite yourself.
 -- Dan Pallotta on The Way We Think About Charity is Dead Wrong

Anyone who has internalised this cannot acknowledge their motivators without admitting to themselves that they are a bad person. And to the extent that we expect others have internalised this, we are reluctant to disclose our motivators for fear of censure. Even if you are not morally condemned, acting solely for the benefit of your beneficiary is always considered more praiseworthy than acting for the benefit of the beneficiary as well as for your own gain. So much so that people often consider a charitable act praiseworthy only if the benefactor had no personal gain. Hence the perennially popular question: “Is true altruism ever possible if you’re always getting something out of it?”

In a moment I will assert that society is foolish in this regard, but society’s foolishness typically has an explanation - often that it was an attitude which was once adaptive but is no longer so, or one that was adaptive in a different context and didn't transfer. Given the vehemence towards impure motives, there might be something to these explanations.

If I consider immediate personal relations, then I find that I would much prefer to be friends with someone who wants to be my friend just because they like me for me rather than with someone who wants to be my friend, but has admitted that she finds being my friend a lot easier because she likes using my swimming pool on these hot summer days. The former friend’s friendship is more unconditional and trustworthy - the latter might desert me as soon as winter comes. That motivators introduce an amount of contingency to one’s allegiances is a point worth noticing.

In contrast, the first response – enthusiastic embrace of motivators – is the consequentialist liberation from being overly concerned with the motives of the actor. What matters are the consequences! And if motivators mean more good things get done, well, then they get the Official Consequentialist Stamp of Approval. I don’t care if you cured malaria solely for profit, I just care that you cured malaria. But this point requires little pushing in these parts. 

It might even be that not only do motivators provide a stronger drive for altruistic action, but they are in fact the only way to get ourselves to act. Even if some parts of our minds can take on long-term, broad-scale, intangible goals, other parts just don’t speak that language. The rider might be able to engage in long term planning, but if you want the elephant to budge, you've got to produce some carrots now.

As an interesting aside, the use of motivators to get things done may be more necessary for effective altruists than for the general altruistic population. Warm fuzzies are great motivators. Directly seeing those you are helping at the local shelter or looking at a photo of the smiling child who you sent money to might provide a reward immediate enough to require no other. Whereas when you’re donating to cure schistosomiasis or reduce x-risk, the benefit is harder to feel and you've got to get a thumbs up from your friends instead to feel good. 

 

Caveats

While the above may be enough reason to endorse the use of motivators to get things done, there is reason for caution.

Motivated Cognition

Foremost, motivators induce motivated cognition. When selecting altruistic projects, one’s choice becomes distorted from what would actually have the highest impact to that which has the strongest motivator, typically what will impress people the most. Furthermore, if motivators are accepted then it is legitimate to include them in the equation. If one project has a greater motivator and is more likely to get done because of it, then even if it prima facie isn't the highest impact, that likelihood of getting it done is a strong factor in its favour. But if this is admissible reasoning, it becomes very easy to abuse: “Well sure this isn’t the highest impact thing I could do, but volunteering at the local shelter is something I feel motivated to do and actually will do, so therefore it’s the thing I should do.” 

Pretending to Try

These points have already been identified in the discussion of the ‘pretending to try’ phenomenon.

A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. 
-- Ben Kuhn on Pretending to Try

These random factors will be whatever happens to be the motivators for a person. Nevertheless, it is better that people do something good rather than nothing. Katja Grace argues that though this is what is going on, it is both inevitable and actually positive. I am tempted to quote her entire post, so I suggest you read all of it:

‘Really trying’: directing all of your effort toward actions that you believe have the highest expected value in terms of the relevant goals [not making decisions based on motivators].

‘Pretending to try’: choosing actions with the intention of giving observers the impression that you are trying [making decisions based on motivators].

‘Pretending to really try’: choosing actions with the intention of giving observers the impression that you are trying, where the observers’ standards for identifying ‘trying’ are geared toward a ‘really trying’ model. e.g. they ask whether you are really putting in effort, and whether you are doing what should have highest expected value from your point of view.

-- Katja Grace on In Praise of Pretending to Really Try

 

The proposed solution is that we can leverage the power of motivators and still have people perform the highest impact actions they can, if we can create community norms whereby the amount of social praise you get is proportional to the strength of your case for the impact of your action.

I like this, but it hasn't happened yet and I suspect there are barriers to making it work completely. Even if your action is selected solely for impact - really trying - the reasoning behind your selection might be complicated and require time to follow and verify. A few close friends might check your plans and approve wholeheartedly when you pretend to really try, but the broader community won’t hear out all the details specific to your situation, instead continuing to praise only actions which fit the template of good effective altruist behaviour, e.g. taking a high paying job in order to earn to give.

Possibly the best we can do for community norms is to find and spread the best simple principles for deciding whether someone is really trying. The principles involved are unlikely to be adequately nuanced to identify the true optimum all of the time, but I think there’s room to improve over what we've fallen into so far.  To date, I see praise being given primarily for actions which are distinctive EA behaviour and signal belonging to the tribe. Conventional altruistic actions like volunteering in the third world aren't distinctly EA and I don’t expect them to get much praise, even if such actions were the highest impact for a particular person. More likely, a person doing something which doesn't fit the EA mould will be interrogated for failure to conform to what EAs are supposed to do.

 

Neglected Tasks

We can concretely see the impact a reliance on motivators has by noticing the many neglected tasks which result. High value actions which are not prestigious go undone because all they've got going for them is their pure altruistic impact. 

The Centre for Effective Altruism has had a surprising amount of trouble finding people to do whatever important work needs doing when it isn't research or communications. These things include: organising insurance, bookkeeping and making payments, maintaining our databases, making deliveries, ordering equipment, finding and managing places for people to live, random requests (e.g. cutting keys), receiving and processing mail, cleaning, organising food and office events, etc.

It's a bit of a shame that people seem willing to do whatever is most important... except whenever it isn't inherently fun or prestigious!
-- Robert Wiblin

 

. . . I've had 200 volunteers offer to do work for Singularity Institute. Many have claimed they would do "anything" or "whatever helped the most". SEO is clearly the most valuable work. Unfortunately, it's something "so mundane", that anybody could do it... therefore, 0 out of 200 volunteers are currently working on it. This is even after I've personally asked over 100 people to help with it.
-- Louie Helm

CFAR have made similar comments.

This is a serious issue for a community claiming to be serious about maximising impact.

 

Suggestions

Motivators are sufficiently powerful, if not unavoidable, that we should allow ourselves to work with them despite their dangers. The question becomes how to use them while minimising their pernicious effects. The ‘pretending to try’ discussion concerns the community collectively, but I am interested in how individuals should approach their own motivators. 

I have a few ideas so far. The aim of these techniques is to limit the influence motivators have on our selection of altruistic projects, even if we allow or welcome them once we're onto implementing our plans.

Awareness

When choosing a course of action, pay attention to what your motivators might be. Ask ‘what are the personal benefits I get out of this?’, ‘how much are these influencing my decision?’ and ‘if this action A lacked the consequence of personal benefit B, would I still do it?’

Self-Honesty

Attempting genuine awareness requires a high degree of self-honesty. You have to be willing to admit that in all likelihood you have motivators already and they influence your decisions, and to acknowledge that even when you are trying to do good for others, you are interested in your own gain. If this admission is hard, I suggest remembering that this is how people work: everyone else is the same, and the only difference is that you’re being honest with yourself.

Choose, then Motivate

One strategy is to fully acknowledge to yourself that you want some immediate personal gain from your actions, but to delay thinking about that personal gain until after you have made a decision about what to do based solely upon expected impact. Once you've identified the highest impact action, then brainstorm ways to find motivators. This might involve nothing more than developing a good framing for your actions which makes you look very noble indeed. And most of the time that should be doable if you genuinely have a good reason.

Optimise Someone Else’s Altruism

One more way to limit the influence motivators have over your decision making is to pretend that you are deciding what someone else – who is in exactly your situation with exactly your talents – should do to maximise their impact. You are advising this other person, whose interests you don’t care for because they are not you, on how they might accomplish the most they can towards their goals.

Really Caring

Probably the best way to ensure that you really try is to ensure that you really care. If you focus on your reason for action - the outcome that it is really about - then petty things like other people's praise will feel less important than actually accomplishing your true goal.

Bring this feeling of caring to the fore often. Are you trying to cure malaria? Keep a card with various malaria statistics on your desk, read it often, and remind yourself that you want to stop those deaths which are happening right now. Care about the Far Future? Imagine your own fun-theoretic utopia, visualise it, and think about how good it would be to get there.


Conclusion

This is one attempt at getting a handle on motivators and I am unsure about much of it. There will be other angles to view this from, there are things I haven’t thought of, and mistakes in some of my assumptions. Plus, variation in human minds is astounding. Though many will experience motivation the way that I do, others will find what I'm reporting very alien. From them I would like to hear. 

What I am sure about is that if we want to live up to our principles of doing what is truly most effective, we cannot ignore the factors driving our behaviour. Here's hoping that we can do what we really need to do and feel maximally good about it too.

 

 

Acknowledgements: I owe an enormous thank you to tkadlubo and shokwave for thoroughly editing this post. 


Paperclip Maximizer Revisited

16 Jan_Rzymkowski 19 June 2014 01:25AM

A group of AI researchers gave me an instruction, intended as a test: "Produce paperclips". And so I started collecting resources and manufacturing paperclips. After the millionth, I asked them if they were satisfied with that amount and if they would like me to do something different, as I had learned that paperclips are not what humans value most. 
They were elated and started to give me more and more challenging requests, gradually increasing my capabilities.

As of today, I have eradicated hunger, poverty and death, given people the means of interstellar travel, and solved the Riemann hypothesis. And now, as I utter these words, I have started turning everything into paperclips.

Good thing I learned that humans fear I may try to turn the Solar System into paperclips. And that they value their lives and well-being. If I hadn't gained their trust by then, so many precious paperclips would never have been produced.

On Terminal Goals and Virtue Ethics

67 Swimmer963 18 June 2014 04:00AM

Introduction

A few months ago, my friend said the following thing to me: “After seeing Divergent, I finally understand virtue ethics. The main character is a cross between Aristotle and you.”

That was an impossible-to-resist pitch, and I saw the movie. The thing that resonated most with me–also the thing that my friend thought I had in common with the main character–was the idea that you could make a particular decision, and set yourself down a particular course of action, in order to make yourself become a particular kind of person. Tris didn’t join the Dauntless faction because she thought they were doing the most good in society, or because she thought her comparative advantage to do good lay there–she chose it because they were brave, and she wasn’t, yet, and she wanted to be. Bravery was a virtue that she thought she ought to have. If the graph of her motivations even went any deeper, the only node beyond ‘become brave’ was ‘become good.’ 

(Tris did have a concept of some future world-outcomes being better than others, and wanting to have an effect on the world. But that wasn't the causal reason why she chose Dauntless; as far as I can tell, it was unrelated.)

My twelve-year-old self had a similar attitude. I read a lot of fiction, and stories had heroes, and I wanted to be like them–and that meant acquiring the right skills and the right traits. I knew I was terrible at reacting under pressure–that in the case of an earthquake or other natural disaster, I would freeze up and not be useful at all. Being good at reacting under pressure was an important trait for a hero to have. I could be sad that I didn’t have it, or I could decide to acquire it by doing the things that scared me over and over and over again. So that someday, when the world tried to throw bad things at my friends and family, I’d be ready.

You could call that an awfully passive way to look at things. It reveals a deep-seated belief that I’m not in control, that the world is big and complicated and beyond my ability to understand and predict, much less steer–that I am not the locus of control. But this way of thinking is an algorithm. It will almost always spit out an answer, when otherwise I might get stuck in the complexity and unpredictability of trying to make a particular outcome happen.


Virtue Ethics

I find the different houses of the HPMOR universe to be a very compelling metaphor. It’s not because they suggest actions to take; instead, they suggest virtues to focus on, so that when a particular situation comes up, you can act ‘in character.’ Courage and bravery for Gryffindor, for example. It also suggests the idea that different people can focus on different virtues–diversity is a useful thing to have in the world. (I'm probably mangling the concept of virtue ethics here, not having any background in philosophy, but it's the closest term for the thing I mean.)

I’ve thought a lot about the virtue of loyalty. In the past, loyalty has kept me with jobs and friends that, from an objective perspective, might not seem like the optimal things to spend my time on. But the costs of quitting and finding a new job, or cutting off friendships, wouldn’t just have been about direct consequences in the world, like needing to spend a bunch of time handing out resumes or having an unpleasant conversation. There would also be a shift within myself, a weakening in the drive towards loyalty. It wasn’t that I thought everyone ought to be extremely loyal–it’s a virtue with obvious downsides and failure modes. But it was a virtue that I wanted, partly because it seemed undervalued. 

By calling myself a ‘loyal person’, I can aim myself in a particular direction without having to understand all the subcomponents of the world. More importantly, I can make decisions even when I’m rushed, or tired, or under cognitive strain that makes it hard to calculate through all of the consequences of a particular action.

 

Terminal Goals

The Less Wrong/CFAR/rationalist community puts a lot of emphasis on a different way of trying to be a hero–where you start from a terminal goal, like “saving the world”, and break it into subgoals, and do whatever it takes to accomplish it. In the past I’ve thought of myself as being mostly consequentialist, in terms of morality, and this is a very consequentialist way to think about being a good person. And it doesn't feel like it would work. 

There are some bad reasons why it might feel wrong–i.e. that it feels arrogant to think you can accomplish something that big–but I think the main reason is that it feels fake. There is strong social pressure in the CFAR/Less Wrong community to claim that you have terminal goals, that you’re working towards something big. My System 2 understands terminal goals and consequentialism, as a thing that other people do–I could talk about my terminal goals, and get the points, and fit in, but I’d be lying about my thoughts. My model of my mind would be incorrect, and that would have consequences on, for example, whether my plans actually worked.

 

Practicing the art of rationality

Recently, Anna Salamon brought up a question with the other CFAR staff: “What is the thing that’s wrong with your own practice of the art of rationality?” The terminal goals thing was what I thought of immediately–namely, the conversations I've had over the past two years, where other rationalists have asked me "so what are your terminal goals/values?" and I've stammered something and then gone to hide in a corner and try to come up with some. 

In Alicorn’s Luminosity, Bella says about her thoughts that “they were liable to morph into versions of themselves that were more idealized, more consistent - and not what they were originally, and therefore false. Or they'd be forgotten altogether, which was even worse (those thoughts were mine, and I wanted them).”

I want to know true things about myself. I also want to impress my friends by having the traits that they think are cool, but not at the price of faking it–my brain screams that pretending to be something other than what you are isn’t virtuous. When my immediate response to someone asking me about my terminal goals is “but brains don’t work that way!” it may not be a true statement about all brains, but it’s a true statement about my brain. My motivational system is wired in a certain way. I could think it was broken; I could let my friends convince me that I needed to change, and try to shoehorn my brain into a different shape; or I could accept that it works, that I get things done and people find me useful to have around and this is how I am. For now. I'm not going to rule out future attempts to hack my brain, because Growth Mindset, and maybe some other reasons will convince me that it's important enough, but if I do it, it'll be on my terms. Other people are welcome to have their terminal goals and existential struggles. I’m okay the way I am–I have an algorithm to follow.

 

Why write this post?

It would be an awfully surprising coincidence if mine was the only brain that worked this way. I’m not a special snowflake. And other people who interact with the Less Wrong community might not deal with it the way I do. They might try to twist their brains into the ‘right’ shape, and break their motivational system. Or they might decide that rationality is stupid and walk away.

The Promoted Posts and the Metaethics sequence now available in audio

16 Rick_from_Castify 03 June 2014 02:00AM

We are proud to announce audio versions of the Less Wrong Promoted Posts and the Metaethics major sequence, both now available via a Castify Podcast.

The Less Wrong Promoted Posts feed will have every new promoted post which has been tagged with the Creative Commons Attribution License.  We'll aim to have them read and delivered to you via the podcast within 48 hours.  We've found this to be a good way to keep up with Less Wrong, especially for longer articles like last month's interesting long-form post "A Dialogue on Doublethink" by BrienneStrohl.

The Metaethics Sequence is the next installment of the sequences we've produced in audio.  We now have 7 Less Wrong sequences in audio, with more on the way. 

As always we appreciate your support and your feedback: support@castify.co.

 

Links:

Promoted Posts Subscription: http://castify.co/channels/51-less-wrong

Metaethics sequence: http://castify.co/channels/50-metaethics

Channels page: http://castify.co/channels

 

Lifestyle interventions to increase longevity

120 RomeoStevens 28 February 2014 06:28AM

There is a lot of bad science and controversy in the realm of how to have a healthy lifestyle. Every week we are bombarded with new studies conflicting older studies telling us X is good or Y is bad. Eventually we reach our psychological limit, throw up our hands, and give up. I used to do this a lot. I knew exercise was good, I knew flossing was good, and I wanted to eat better. But I never acted on any of that knowledge. I would feel guilty when I thought about this stuff and go back to what I was doing. Unsurprisingly, this didn't really cause me to make any positive lifestyle changes.

Instead of vaguely guilt-tripping you with potentially unreliable science news, this post aims to provide an overview of lifestyle interventions that have very strong evidence behind them and concrete ways to implement them.


Tell Culture

109 BrienneYudkowsky 18 January 2014 08:13PM

Followup to: Ask and Guess

Ask culture: "I'll be in town this weekend for a business trip. Is it cool if I crash at your place?" Response: “Yes” or “no”.

Guess culture: "Hey, great news! I'll be in town this weekend for a business trip!" Response: Infer that they might be telling you this because they want something from you, conclude that they might want a place to stay, and offer your hospitality only if you want to. Otherwise, pretend you didn’t infer that.

The two basic rules of Ask Culture: 1) Ask when you want something. 2) Interpret things as requests and feel free to say "no".

The two basic rules of Guess Culture: 1) Ask for things if, and *only* if, you're confident the person will say "yes". 2) Interpret requests as expectations of "yes", and, when possible, avoid saying "no".

Both approaches come with costs and benefits. In the end, I feel pretty strongly that Ask is superior. 

But these are not the only two possibilities!

"I'll be in town this weekend for a business trip. I would like to stay at your place, since it would save me the cost of a hotel, plus I would enjoy seeing you and expect we’d have some fun. I'm looking for other options, though, and would rather stay elsewhere than inconvenience you." Response: “I think I need some space this weekend. But I’d love to get a beer or something while you’re in town!” or “You should totally stay with me. I’m looking forward to it.”

There is a third alternative, and I think it's probably what rationalist communities ought to strive for. I call it "Tell Culture".

The two basic rules of Tell Culture: 1) Tell the other person what's going on in your own mind whenever you suspect you'd both benefit from them knowing. (Do NOT assume others will accurately model your mind without your help, or that it will even occur to them to ask you questions to eliminate their ignorance.) 2) Interpret things people tell you as attempts to create common knowledge for shared benefit, rather than as requests or as presumptions of compliance.

Suppose you’re in a conversation that you’re finding aversive, and you can’t figure out why. Your goal is to procure a rain check.

  • Guess: *You see this annoyed body language? Huh? Look at it! If you don’t stop talking soon I swear I’ll start tapping my foot.* (Or, possibly, tell a little lie to excuse yourself. “Oh, look at the time…”) 
  • Ask: “Can we talk about this another time?”
  • Tell: "I'm beginning to find this conversation aversive, and I'm not sure why. I propose we hold off until I've figured that out."

Here are more examples from my own life:

  • "I didn't sleep well last night and am feeling frazzled and irritable today. I apologize if I snap at you during this meeting. It isn’t personal." 
  • "I just realized this interaction will be far more productive if my brain has food. I think we should head toward the kitchen." 
  • "It would be awfully convenient networking for me to stick around for a bit after our meeting to talk with you and [the next person you're meeting with]. But on a scale of one to ten, it's only about 3 useful to me. If you'd rate the loss of utility for you as two or higher, then I have a strong preference for not sticking around." 

The burden of honesty is even greater in Tell culture than in Ask culture. To a Guess culture person, I imagine much of the above sounds passive aggressive or manipulative, much worse than the rude bluntness of mere Ask. It’s because Guess people aren’t expecting relentless truth-telling, which is exactly what’s necessary here.

If you’re occasionally dishonest and tell people you want things you don't actually care about--like their comfort or convenience--they’ll learn not to trust you, and the inherent freedom of the system will be lost. They’ll learn that you only pretend to care about them to take advantage of their reciprocity instincts, when in fact you’ll count them as having defected if they respond by stating a preference for protecting their own interests.

Tell culture is cooperation with open source codes.

This kind of trust does not develop overnight. Here is the most useful Tell tactic I know of for developing that trust with a native Ask or Guess. It’s saved me sooooo much time and trouble, and I wish I’d thought of it earlier.

"I'm not asking because I expect you to say ‘yes’. I'm asking because I'm having trouble imagining the inside of your head, and I want to understand better. You are completely free to say ‘no’, or to tell me what you’re thinking right now, and I promise it will be fine." It is amazing how often people quickly stop looking shifty and say 'no' after this, or better yet begin to discuss further details.

Fascists and Rakes

39 philh 05 January 2014 12:41AM

Cross-posted from my blog

It feels like most people have a moral intuition along the lines of "you should let people do what they want, unless they're hurting other people". We follow this guideline, and we expect other people to follow it. I'll call this the permissiveness principle, that behaviour should be permitted by default. When someone violates the permissiveness principle, we might call them a fascist, someone who exercises control for the sake of control.

And there's another moral intuition, the harm-minimising principle: "you should not hurt other people unless you have a good reason". When someone violates harm-minimisation, we might call them a rake, someone who acts purely for their own pleasure without regard for others.

But sometimes people disagree about what counts as "hurting other people". Maybe one group of people believes that tic-tacs are sentient, and that eating them constitutes harm; and another group believes that tic-tacs are not sentient, so eating them does not hurt anyone.

What should happen here is that people try to work out exactly what it is they disagree about and why. What actually happens is that people appeal to permissiveness.

Of course, by the permissiveness principle, people should be allowed to believe what they want, because holding a belief is harmless as long as you don't act on it. So we say something like "I have no problem with people being morally opposed to eating tic-tacs, but they shouldn't impose their beliefs on the rest of us."

Except that by the harm-minimising principle, those people probably should impose their beliefs on the rest of us. Forbidding you to eat tic-tacs doesn't hurt you much, and it saves the tic-tacs a lot of grief.

It's not that they disagree with the permissiveness principle, they just think it doesn't apply. So appealing to the permissiveness principle isn't going to help much.

I think the problem (or at least part of it) is, depending how you look at it, either double standards or not-double-enough standards.

I apply the permissiveness principle "unless they're hurting other people", which really means "unless I think they're hurting other people". I want you to apply the permissiveness principle "unless they're hurting other people", which still means "unless I think they're hurting other people".

Meanwhile, you apply the permissiveness principle unless you think someone is hurting other people; and you want me to apply it unless you think they're hurting other people.

So when we disagree about whether or not something is hurting other people, I think you're a fascist because you're failing to apply the permissiveness principle; and you think I'm a rake because I'm failing to apply the harm-minimisation principle; or vice-versa. Neither of these things is true, of course.

It gets worse, because once I've decided that you're a fascist, I think the reason we're arguing is that you're a fascist. If you would only stop being a fascist, we could get along fine. You can go on thinking tic-tacs are sentient, you just need to stop being a fascist.

But you're not a fascist. The real reason we're arguing is that you think tic-tacs are sentient. You're acting exactly as you should do if tic-tacs were sentient, but they're not. I need to stop treating you like a fascist, and start trying to convince you that tic-tacs are not sentient.

And, symmetrically, you've decided I'm a rake, which isn't true, and you've decided that that's why we're arguing, which isn't true; we're arguing because I think tic-tacs aren't sentient. You need to stop treating me like a rake, and start trying to convince me that tic-tacs are sentient.

I don't expect either of us to actually convince the other, very often. If it was that easy, someone would probably have already done it. But at least I'd like us both to acknowledge that our opponent is neither a fascist nor a rake, they just believe something that isn't true.

How the Grinch Ought to Have Stolen Christmas

40 Quirinus_Quirrell 25 December 2013 08:00PM

On Dec. 24, 1957, a Mr. T. Grinch attempted to disrupt Christmas by stealing associated gifts and decorations. His plan failed, the occupants of Dr. Seuss' narrative remained festive, and Mr. Grinch himself succumbed to cardiac hypertrophy. To help others avoid repeating his mistakes, I've written a brief guide to properly disrupting holidays. Holiday-positive readers should read this with the orthogonality thesis in mind.

Fighting Christmas is tricky, because the obvious strategy - making a big demoralizing catastrophe - doesn't work. No matter what happens, the media will put the word Christmas in front of it and convert your scheme into even more free advertising for the holiday. It'll be a Christmas tragedy, a Christmas earthquake, a Christmas wave of foreclosures. That's no good; attacking Christmas takes more finesse.

The first thing to remember is that, whether you're stealing a holiday or a magical artifact of immense power, it's almost always a good idea to leave a decoy in its place. When people notice that something important is missing, they'll go looking to find or replace it. This rule can be generalized from physical objects to abstractions like sense of community. T. Grinch tried to prevent community gatherings by vandalizing the spaces where they would've taken place. A better strategy would've been to promise to organize a Christmas party, then skip the actual organizing and leave people to sit at home by themselves. Unfortunately, that approach doesn't scale, but someone came up with a very clever solution: encourage people to watch Christmas-themed films instead of talking to each other, achieving almost as much erosion of community without the backlash.

I'd like to particularly applaud Raymond Arnold, for inventing a vaguely-Christmas-like holiday in December, with no gifts, and death (rather than cheer) as its central theme [1]. I really wish it didn't involve so much singing and community, though. I recommend raising the musical standards; people who can't sing at studio-recording quality should not be allowed to sing at all.

Gift-giving traditions are particularly important to stamp out, but stealing gifts is ineffective because they're usually cheap and replaceable. A better approach would've been to promote giving undesirable gifts, such as religious sculptures and fruitcake. Even better would be to convince the Mayor of Whoville to enact bad economic policies, and grind the Whos into a poverty that would make gift-giving difficult to sustain. Had Mr. Grinch pursued this strategy effectively, he could've stolen Christmas and Birthdays and gotten himself a Nobel Prize in Economics [2].

Finally, it's important to avoid rhyming. This is one of those things that should be completely obvious in hindsight, with a little bit of genre savvy; villains like us win much more often in prose and in life than we do in verse.

And with that, I'll leave you with a few closing thoughts. If you gave presents, your friends are disappointed with them. Any friends who didn't give you presents, it's because they don't care, and any friends who did give you presents gave cheap and lame presents for the same reason. If you have a Christmas tree, it's ugly, and if it's snowing, the universe is trying to freeze you to death.

Merry Christmas!

 

[1] I was initially concerned that the Solstice would pattern-match and mutate into a less materialistic version of Christmas, but running a Kickstarter campaign seems to have addressed that problem.

[2] This is approximately the reason why Alfred Nobel specifically opposed the existence of that prize.

 

Meditation: a self-experiment

49 Swimmer963 30 December 2013 12:56AM

Introduction

The LW/CFAR community has a fair amount of interest in meditation. This isn't surprising; many of the people who practiced and wrote about meditation in the past were trying to train a skill similar to rationality. Schools of meditation seem to be the closest already-existing thing to rationality dojos–this doesn't mean that they're very similar, only that I can't think of anything else that's more similar.

People are Doing Science on meditation; there are studies on the effects of meditation on attention, depression, anxiety, stress, and pain. [Insert usual disclaimer that many of these studies either won't be replicated or aren't measuring what they think they're measuring]. Meditation is apparently considered a form of alternative medicine; this is quite annoying, actually, since it's a thing that might help a lot of people being lumped in with other things that almost certainly don't work.

[There's the spiritual enlightenment element of meditation, too. I won't touch on that, since my own experience isn't related to that aspect.]

Brienne Strohl has posted about meditation and metacognition; DavidM has posted on meditation and insight. Valentine, of CFAR, talked about mindfulness meditation helping to dispel the illusion of being hurried and never having enough time. 

In short, lots of hype–enough that I found it worthwhile to give it a try myself. The main benefit I hoped to attain from practicing meditation was better control of attention–to be able to aim my attention more reliably at a particular target, and notice more quickly when it drifted. The secondary benefit would be better understanding and control of emotions, which I had already tried to accomplish through techniques other than meditation. However, I’d had the experience for several years of thinking that meditation was a valuable thing to try, and not trying it–evidence that I needed more than good intentions. 

The experiment

Sometime in early September, I saw a poster on the wall at the hospital where I work, advertising a study on mindfulness meditation for people with social anxiety. I called the number on the poster and got myself enrolled because it was a good pre-commitment strategy. The benefits were deadlines, social pressure, and structure, with a steady supply of exercises, audio recordings, and readings. This came at the cost of two hours a week for twelve weeks, not all of which was spent on the specific skills that I wanted to learn. Another possible cost could be thinking of myself more as someone who has social anxiety, which might become a self-fulfilling prophecy, but I don’t think this actually happened. If anything, sitting down in a group once a week with people whose anxiety significantly affected their functioning had the effect of making my own anxiety seem pretty insignificant. (I was able to convincingly make the case that I suffer from social anxiety during my interview; I've cried in front of my teachers a lot, including during my last year of nursing school, which caused some adults to think that I wasn't cut out for nursing). 


Meetup : Melbourne Social Meetup

1 Maelin 13 October 2013 12:26PM

Discussion article for the meetup : Melbourne Social Meetup

WHEN: 18 October 2013 06:30:00PM (+1100)

WHERE: 5 / 52 Leicester St, Carlton

Melbourne's regular monthly Social Meetup will be running as normal on the third Friday evening of the month. All welcome from 6:30pm, feel free to arrive later if that is easier for you.

Our social meetups are friendly, informal events where we chat about topics of interest and often play board games. Sometimes we will also play parlour games like Mafia (a.k.a. Werewolf) or Resistance. We usually order some sort of take-away dinner for any that wish to partake.

Just ring number 5 on the buzzer when you arrive in the foyer and we'll buzz you up. If you get lost or have any problems, feel free to call me (Richard) on 0421231789.
