Pay other people to go vegetarian for you?

12 jkaufman 12 April 2013 01:56AM

I notice that a large fraction of Effective Altruism people are vegetarian. This makes sense: in general, Effective Altruists take moral issues seriously, even when that means changing their lifestyles. I'm not sure it's the right trade-off, though.

One way to think about this is to convert it into money. How much would I need to be paid to give up eating meat? All animal products? How much money would I need to spend on myself to be about as happy as I would be with less money but continuing to eat animals? I'd probably be willing to go vegetarian for about $500/year, vegan for maybe $2000/year.

It turns out you can probably pay to convince other people to go vegetarian much more cheaply than that. I estimate the cost of a vegetarian-year at between $4.29 and $536, while Brian Tomasik, using better methodology (which I reviewed), estimates $11. The approach is to place ads on Facebook for a site where people can watch an animal cruelty video and, ideally, decide to become vegetarian or vegan.
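The back-of-the-envelope arithmetic behind this kind of estimate can be sketched as follows. Every number here is an illustrative assumption, not a figure from my analysis or Brian Tomasik's:

```python
# Hedged sketch of a cost-per-vegetarian-year estimate from online ads.
# All parameter values below are made up for illustration.

def cost_per_vegetarian_year(cost_per_click, conversion_rate, years_vegetarian):
    """Ad dollars spent per year of vegetarianism produced.

    cost_per_click:    dollars paid per ad click to the video site
    conversion_rate:   fraction of visitors who actually go vegetarian
    years_vegetarian:  average years a convert stays vegetarian
    """
    cost_per_convert = cost_per_click / conversion_rate
    return cost_per_convert / years_vegetarian

# Illustration only: $0.20 per click, 1 in 100 visitors convert,
# and each convert stays vegetarian for two years on average.
print(cost_per_vegetarian_year(0.20, 0.01, 2))
```

The wide range in my own estimate comes from uncertainty in exactly these parameters, especially the conversion rate, which is hard to measure from ad data alone.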

If you would get more than $11/year worth of enjoyment out of continuing to eat meat, why not give $11/year to convince someone else to not eat meat for you? Or give $50/year and be on the safe side?

(While you're giving money, you should probably give it to the organization that you think will do the most good with it, which I think is probably one of GiveWell's top charities. The nice thing about money as opposed to actions is that it's easy to redirect.)

I also posted this on my blog.

Explicit and tacit rationality

40 lukeprog 09 April 2013 11:33PM

Like Eliezer, I "do my best thinking into a keyboard." It starts with a burning itch to figure something out. I collect ideas and arguments and evidence and sources. I arrange them, tweak them, criticize them. I explain it all in my own words so I can understand it better. By then it is nearly something that others would want to read, so I clean it up and publish, say, How to Beat Procrastination. I write essays in the original sense of the word: "attempts."

This time, I'm trying to figure out something we might call "tacit rationality" (c.f. tacit knowledge).

I tried and failed to write a good post about tacit rationality, so I wrote a bad post instead — one that is basically a patchwork of somewhat-related musings on explicit and tacit rationality. Therefore I'm posting this article to LW Discussion. I hope the ensuing discussion ends up leading somewhere with more clarity and usefulness.

 

Three methods for training rationality

Which of these three options do you think will train rationality (i.e. systematized winning, or "winning-rationality") most effectively?

  1. Spend one year reading and re-reading The Sequences, studying the math and cognitive science of rationality, and discussing rationality online and at Less Wrong meetups.
  2. Attend a CFAR workshop, then spend the next year practicing those skills and other rationality habits every week.
  3. Run a startup or small business for one year.

Option 1 seems to be pretty effective at training people to talk intelligently about rationality (let's call that "talking-rationality"), and it seems to inoculate people against some common philosophical mistakes.

We don't yet have any examples of someone doing Option 2 (the first CFAR workshop was May 2012), but I'd expect Option 2 — if actually executed — to result in more winning-rationality than Option 1, and also a modicum of talking-rationality.

What about Option 3? Unlike Option 2 or especially Option 1, I'd expect it to train almost no ability to talk intelligently about rationality. But I would expect it to result in relatively good winning-rationality, due to its tight feedback loops.

 

Talking-rationality and winning-rationality can come apart

I've come to believe... that the best way to succeed is to discover what you love and then find a way to offer it to others in the form of service, working hard, and also allowing the energy of the universe to lead you.

Oprah Winfrey

Oprah isn't known for being a rational thinker. She is a known peddler of pseudoscience, and she attributes her success (in part) to allowing "the energy of the universe" to lead her.

Yet she must be doing something right. Oprah is a true rags-to-riches story. Born in Mississippi to an unwed teenage housemaid, she was so poor she wore dresses made of potato sacks. She was molested by a cousin, an uncle, and a family friend. She became pregnant at age 14.

But in high school she became an honors student, won oratory contests and a beauty pageant, and was hired by a local radio station to report the news. She became the youngest-ever news anchor at Nashville's WLAC-TV, then hosted several shows in Baltimore, then moved to Chicago and within months her own talk show shot from last place to first place in the ratings there. Shortly afterward her show went national. She also produced and starred in several TV shows, was nominated for an Oscar for her role in a Steven Spielberg movie, launched her own TV cable network and her own magazine (the "most successful startup ever in the [magazine] industry" according to Fortune), and became the world's first female black billionaire.

I'd like to suggest that Oprah's climb probably didn't come merely through inborn talent, hard work, and luck. To get from potato sack dresses to the Forbes billionaire list, Oprah had to make thousands of pretty good decisions. She had to make pretty accurate guesses about the likely consequences of various actions she could take. When she was wrong, she had to correct course fairly quickly. In short, she had to be fairly rational, at least in some domains of her life.

Similarly, I know plenty of business managers and entrepreneurs who have a steady track record of good decisions and wise judgments, and yet they are religious, or they commit basic errors in logic and probability when they talk about non-business subjects.

What's going on here? My guess is that successful entrepreneurs and business managers and other people must have pretty good tacit rationality, even if they aren't very proficient with the "rationality" concepts that Less Wrongers tend to discuss on a daily basis. Stated another way, successful businesspeople make fairly rational decisions and judgments, even though they may confabulate rather silly explanations for their success, and even though they don't understand the math or science of rationality well.

LWers can probably outperform Mark Zuckerberg on the CRT and the Berlin Numeracy Test, but Zuckerberg is laughing at them from atop a huge pile of utility.

 

Explicit and tacit rationality

Patri Friedman, in Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality, reminded us that skill acquisition comes from deliberate practice, and reading LW is a "shiny distraction," not deliberate practice. He said a real rationality practice would look more like... well, what Patri describes is basically CFAR, though CFAR didn't exist at the time.

In response, and again long before CFAR existed, Anna Salamon wrote Goals for which Less Wrong does (and doesn't) help. Summary: Some domains provide rich, cheap feedback, so you don't need much LW-style rationality to become successful in those domains. But many of us have goals in domains that don't offer rapid feedback: e.g. whether to buy cryonics, which 40-year investments are safe, which metaethics to endorse. For this kind of thing you need LW-style rationality. (We could also state this as: domains with rapid feedback train tacit rationality with respect to those domains, but in domains without rapid feedback you've got to do the best you can with LW-style "explicit rationality.")

The good news is that you should be able to combine explicit and tacit rationality. Explicit rationality can help you realize that you should force tight feedback loops into whichever domains you want to succeed in, so that you can develop good intuitions about how to succeed in those domains. (See also: Lean Startup or Lean Nonprofit methods.)

Explicit rationality could also help you realize that the cognitive biases most-discussed in the literature aren't necessarily the ones you should focus on ameliorating, as Aaron Swartz wrote:

Cognitive biases cause people to make choices that are most obviously irrational, but not most importantly irrational... Since cognitive biases are the primary focus of research into rationality, rationality tests mostly measure how good you are at avoiding them... LW readers tend to be fairly good at avoiding cognitive biases... But there is a whole series of much more important irrationalities that LWers suffer from. (Let's call them "practical biases" as opposed to "cognitive biases," even though both are ultimately practical and cognitive.)

...Rationality, properly understood, is in fact a predictor of success. Perhaps if LWers used success as their metric (as opposed to getting better at avoiding obvious mistakes), they might focus on their most important irrationalities (instead of their most obvious ones), which would lead them to be more rational and more successful.


Final scattered thoughts

  • If someone is consistently winning, and not just because they have tons of wealth or fame, then maybe you should conclude they have pretty good tacit rationality even if their explicit rationality is terrible.
  • The positive effects of tight feedback loops might trump the effects of explicit rationality training.
  • Still, I suspect explicit rationality plus tight feedback loops could lead to the best results of all.
  • I really hope we can develop a real rationality dojo.
  • If you're reading this post, you're probably spending too much time reading Less Wrong, and too little time hacking your motivation system, learning social skills, and learning how to inject tight feedback loops into everything you can.

[Link] Values Spreading is Often More Important than Extinction Risk

11 Pablo_Stafforini 07 April 2013 05:14AM

In a recent essay, Brian Tomasik argues that meme-spreading has higher expected utility than x-risk reduction. His analysis assumes a classical utilitarian ethic, but it may be generalizable to other value systems.  Here's the summary:

I personally do not support efforts to reduce extinction risk because I think space colonization would potentially give rise to astronomical amounts of suffering. However, even if I thought reducing extinction risk was a good idea, I would not work on it, because spreading your particular values has generally much higher leverage than being one more voice for safety measures against extinction in a world where reducing extinction risk is hard and almost everyone has some incentives to invest in the issue.

[link] Friendly AI and Utilitarianism

8 lukeprog 15 August 2011 11:34PM

I've begun an online discussion with Alan Dawrst (Brian Tomasik) of utilitarian-essays.com concerning Friendly AI and utilitarianism. Interested parties may wish to follow along or participate.

The forum thread now contains many overlapping discussions. For clarity, here's an index of the narrow, core discussion between Alan and me:

 

  1. Luke #1
  2. Alan #1
  3. Luke #2
  4. Alan #2
  5. Luke #3
  6. Alan #3
  7. Luke #4
  8. Alan #4
  9. Luke #5
  10. Alan #5
  11. Luke #6
  12. Alan #6
  13. Luke #7
  14. Alan #7

 

Giving What We Can and 80,000 Hours are recruiting!

15 lukeprog 25 February 2012 06:34PM

Below is a message from my friends at Giving What We Can and 80,000 Hours, two key organizations in the efficient charity or "optimal philanthropy" movement.

 

Giving What We Can and 80,000 Hours are both taking on paid staff from next year. So if you would be interested in working part- or full-time next year for either of these two organisations, then please send an e-mail to niel.bowerman@givingwhatwecan.org by 5pm GMT on 2nd March, with a short description telling us a little about yourself. We can then send you further information on how to apply, and on what working with us would involve.

Areas in which we are particularly interested in hiring are:

 

Strategic Research.  Both organisations are highly concerned to know whether their method is the optimal way to make the world a better place; and, if it isn’t, how we can improve it.  We’re looking to hire staff to help us to answer that question.  

If you have strong research skills, have performed well academically, and are sympathetic to the GWWC or 80k way of thinking, then you would fit well into this role. Relevant background subjects include but are not limited to: philosophy, mathematics, economics and the other sciences.

 

Operations and management. In order for the organisations to remain secure and successful, we need strong support at the operational level.

If you have an eye for detail, and especially if you have previous experience working within operations or management, then you could flourish in this role.

 

Volunteer Recruitment. Both organisations are largely run by volunteers, and are looking to expand significantly next year. We’re especially looking to recruit highly dedicated volunteers who are willing to work 10hrs/week or more.

If you are people-minded, or have previous experience with volunteer-run organisations, then this could be the role for you.

 

Potential employment is not limited to these roles, however, and there would be considerable room for any employee to partially write their own role.  What we are principally looking for are dedicated people who understand and support the GWWC or 80k approach to making the world a better place.

For those who haven't heard of the organisations before, here is a short description:

Giving What We Can is concerned with two primary activities: encouraging people to give more, and to give more effectively, to causes that fight poverty in the developing world.  Every member of the organisation pledges to give at least 10% of their income to the charities that best fight extreme poverty.  Giving What We Can also does in-depth charity evaluation, and advocates that people give more to the most cost-effective charities.  We've so far directed over $1.5 million to the development charities we expect to do the most good, with over $40,000,000 pledged.

80,000 Hours encourages people to pursue a high-impact ethical career: a career that enables them to do as much to make the world a better place as possible.  The careers it highlights are professional philanthropy – pursuing a lucrative career in order to donate a substantial proportion of one’s earnings to the best causes – and careers in certain research areas or careers through which one can have a large influence over others.  It now has 74 members, and has also received major media coverage. 

A major aim of both organisations is to build the movement of effective altruists: people who take a rational approach to making the world as good a place as possible, and are willing to put that idea into practice.  Between the two organisations, the ultimate cause is not limited to global poverty alleviation. For example, we are doing research into optimal x-risk mitigation strategy, and cost-effectiveness evaluation of x-risk mitigation organisations.

If you were to work for either organisation, you would have considerable flexibility in your work, as part of a young and fast-growing charity.  You’d be working in the company of other highly intelligent and enthusiastic staff, among a community of people doing their best to make a huge positive impact on the world.  It’s an exciting opportunity!

So, even if you’re not sure, but you’re interested in finding out more, please register your interest by e-mailing niel.bowerman@givingwhatwecan.org.

Thanks for your interest,

The GWWC and 80K Teams

 

GiveWell and the Centre for Effective Altruism are recruiting

11 Pablo_Stafforini 19 November 2012 11:53PM

Both GiveWell and the Centre for Effective Altruism (CEA) --an Oxford-based umbrella organization consisting of Giving What We Can, 80,000 Hours, The Life You Can Save, and Effective Animal Activism-- have been discussed here before.  So I thought some folks might want to know that these organizations are recruiting for a number of positions.  Here are relevant excerpts from the official job announcements:

GiveWell: Research Analyst

GiveWell is looking for a Research Analyst to help us evaluate charities, find the most outstanding giving opportunities, and publish our analysis to help donors decide where to give.

 

Effective Animal Activism: Executive Director

Effective Animal Activism is a recently-founded project of 80,000 Hours. It is the world’s first online resource and international community for people who want to reduce animal suffering effectively. We are currently looking for a part-time executive director. Responsibilities will include creating content, managing the community, publicizing the site, and overseeing as well as undertaking further charity research. Future projects include creating a publication on our intervention evaluation once complete, attending conferences, running ad campaigns, and reaching out to the media, animal charities and philanthropists.

 

Giving What We Can: Head of Communications

We are looking for someone to communicate Giving What We Can’s message to the world. As Communications Manager you would be responsible for handling our press relations and guiding our public image.

 

80,000 Hours: Head of Careers Research

We are looking for someone to drive cutting-edge research into effective ethical careers and translate it into one-on-one and online careers advice, which you’ll share with interesting people from all over the world.

 

The Life You Can Save: Director of Outreach (Intern)

We are looking for someone to lead our outreach to pledgers and supporters as well as local groups, other charities, and corporations. In this role, you’ll play a key part in setting our strategic priorities and driving the growth of The Life You Can Save. You’ll be working alongside Peter Singer – one of the most influential ethicists of the 20th century.

 

Centre for Effective Altruism: Head of Fundraising and External Relations

We are looking for someone to manage our fundraising and represent us to other organisations. In this role you would serve all four organisations in the Centre for Effective Altruism: Giving What We Can, 80,000 Hours, The Life You Can Save and Effective Animal Activism.

(Full disclosure: I'm friends with the co-founders of CEA and have donated to Effective Animal Activism.)

 

Transhumanist philosopher David Pearce AMA on Reddit

8 betterthanwell 22 March 2012 06:59PM

Transhumanist philosopher David Pearce co-founded Humanity+ with Nick Bostrom.

He is currently answering questions in an AMA on reddit/r/transhumanism.

 

Richard Dawkins on vivisection: "But can they suffer?"

14 XiXiDu 04 July 2011 04:56PM

The great moral philosopher Jeremy Bentham, founder of utilitarianism, famously said, 'The question is not, "Can they reason?" nor, "Can they talk?" but rather, "Can they suffer?"' Most people get the point, but they treat human pain as especially worrying because they vaguely think it sort of obvious that a species' ability to suffer must be positively correlated with its intellectual capacity.

[...]

Nevertheless, most of us seem to assume, without question, that the capacity to feel pain is positively correlated with mental dexterity - with the ability to reason, think, reflect and so on. My purpose here is to question that assumption. I see no reason at all why there should be a positive correlation. Pain feels primal, like the ability to see colour or hear sounds. It feels like the sort of sensation you don't need intellect to experience. Feelings carry no weight in science but, at the very least, shouldn't we give the animals the benefit of the doubt?

[...]

I can see a Darwinian reason why there might even be a negative correlation between intellect and susceptibility to pain. I approach this by asking what, in the Darwinian sense, pain is for. It is a warning not to repeat actions that tend to cause bodily harm. Don't stub your toe again, don't tease a snake or sit on a hornet, don't pick up embers however prettily they glow, be careful not to bite your tongue. Plants have no nervous system capable of learning not to repeat damaging actions, which is why we cut live lettuces without compunction.

It is an interesting question, incidentally, why pain has to be so damned painful. Why not equip the brain with the equivalent of a little red flag, painlessly raised to warn, "Don't do that again"?

[...] my primary question for today: would you expect a positive or a negative correlation between mental ability and ability to feel pain? Most people unthinkingly assume a positive correlation, but why?

Isn't it plausible that a clever species such as our own might need less pain, precisely because we are capable of intelligently working out what is good for us, and what damaging events we should avoid? Isn't it plausible that an unintelligent species might need a massive wallop of pain, to drive home a lesson that we can learn with less powerful inducement?

At very least, I conclude that we have no general reason to think that non-human animals feel pain less acutely than we do, and we should in any case give them the benefit of the doubt. Practices such as branding cattle, castration without anaesthetic, and bullfighting should be treated as morally equivalent to doing the same thing to human beings.

Link: boingboing.net/2011/06/30/richard-dawkins-on-v.html

Imagine a being so vast and powerful that its theory of mind of other entities would itself be a sentient entity. If such a being came across human beings, it might model them at so fine a level of resolution that each of its imaginings of them would itself be conscious.

Just as we do not grant rights to our thoughts, or to the bacteria that make up a large part of our bodies, such an entity might be unable to grant existential rights to its own thought processes, even if merely perceiving a human being means running a human-level simulation of that person.

But even for us humans it may not be possible to account for every being in our ethical conduct; we may not manage to grant everything the rights it deserves. Nevertheless, the answer cannot be to abandon morality altogether, if only because human nature won't permit it: compassion is part of our preferences.

Our task must be to free ourselves . . . by widening our circle of compassion to embrace all living creatures and the whole of nature and its beauty.

— Albert Einstein

How do we solve this dilemma? Right now it's relatively easy to handle: there are humans, and then there is everything else. But even today, without uplifted animals, artificial intelligence, human-level simulations, cyborgs, chimeras, or posthuman beings, it is getting harder to draw the line, because science is advancing rapidly, allowing us to keep alive people with severe brain injuries or to save a premature fetus whose mother has already died. Then there are the mentally disabled and other humans who are not neurotypical. We are also increasingly becoming aware that many non-human beings on this planet are far more intelligent and cognizant than we expected.

And remember: what may be the case in the future has already been the case in our not-too-distant past. There was a time when three different human species lived on the same planet at the same time, three intelligent species of the genus Homo, yet very different. As recently as 22,000 years ago we, H. sapiens, shared this oasis of life with Homo floresiensis and Homo neanderthalensis.

How would we handle such a situation today, at a time when we still haven't learnt to live together in peace, when we still kill members of our own genus? Most of us are not even ready to become vegetarian in the face of global warming, although livestock farming accounts for an estimated 18% of the planet's greenhouse gas emissions.

So where do we draw the line?

Leave a Line of Retreat

58 Eliezer_Yudkowsky 25 February 2008 11:57PM

"When you surround the enemy
Always allow them an escape route.
They must see that there is
An alternative to death."
        —Sun Tzu, The Art of War, Cloud Hands edition

"Don't raise the pressure, lower the wall."
        —Lois McMaster Bujold, Komarr

Last night I happened to be conversing with a nonrationalist who had somehow wandered into a local rationalists' gathering.  She had just declared (a) her belief in souls and (b) that she didn't believe in cryonics because she believed the soul wouldn't stay with the frozen body.  I asked, "But how do you know that?"  From the confusion that flashed on her face, it was pretty clear that this question had never occurred to her.  I don't say this in a bad way—she seemed like a nice person with absolutely no training in rationality, just like most of the rest of the human species.  I really need to write that book.

Most of the ensuing conversation was on items already covered on Overcoming Bias—if you're really curious about something, you probably can figure out a good way to test it; try to attain accurate beliefs first and then let your emotions flow from that—that sort of thing.  But the conversation reminded me of one notion I haven't covered here yet:

"Make sure," I suggested to her, "that you visualize what the world would be like if there are no souls, and what you would do about that.  Don't think about all the reasons that it can't be that way, just accept it as a premise and then visualize the consequences.  So that you'll think, 'Well, if there are no souls, I can just sign up for cryonics', or 'If there is no God, I can just go on being moral anyway,' rather than it being too horrifying to face.  As a matter of self-respect you should try to believe the truth no matter how uncomfortable it is, like I said before; but as a matter of human nature, it helps to make a belief less uncomfortable, before you try to evaluate the evidence for it."

continue reading »

Group selection update

38 PhilGoetz 01 November 2010 04:51PM

Group selection might seem like an odd topic for a LessWrong post.  Yet a Google search for "group selection" site:lesswrong.com turns up 345 results.

The power and generality of the concept of evolution alone are enough to justify posts on it here.  In addition, the impact group selection could have on the analysis of social structure, government, politics, and the architecture of self-modifying artificial intelligences is hard to overestimate.  David Sloan Wilson wrote that "group selection is arguably the single most important concept for understanding the nature of politics from an evolutionary perspective."  (You should read his complete article here - it's a much more thorough debunking of the debunking of group selection than this post, although I'm not convinced his interpretation of kin selection is sensible.)  And I will argue that it has particular relevance to the study of rationality.

Eliezer's earlier post The Tragedy of Group Selectionism dismisses group selection, based on a mathematical model by Henry Harpending and Alan Rogers.  That model is, however, fatally flawed:  It studies the fixation of altruistic vs. selfish genes within groups of fixed size.  The groups never go extinct.  But group selection happens when groups are selected against.  The math used to argue against group selection assumes from the outset that group selection does not occur.  (This is also true of Maynard Smith's famous haystack model.)

continue reading »
