Comment author: Fluttershy 20 November 2014 05:14:33AM 1 point [-]

Um, hello there-- thank you for posting this! Would it be okay if I posted some constructive criticisms of 80,000 Hours here? I wanted to ask before posting because I didn't know if you would mind, and I wanted to assure you before posting anything negative-seeming that I wouldn't intend any criticism to be taken as being a veiled insult.

Comment author: RobertWiblin 20 November 2014 12:11:38PM 1 point [-]

Hey, this doesn't seem like the best location for it. Is there a post on the 80,000 Hours or EA blogs related to your criticism you could use?

The Centre for Effective Altruism is hiring to fill five roles in research, operations and outreach

10 RobertWiblin 19 November 2014 10:41PM

The Centre for Effective Altruism, the group behind 80,000 Hours, Giving What We Can, the Global Priorities Project, Effective Altruism Outreach, and to a lesser extent The Life You Can Save and Animal Charity Evaluators, is looking to grow its team with a number of new roles:

We are so keen to find great people that if you introduce us to someone new who we end up hiring, we will pay you $1,000 for the favour! If you know anyone awesome who would be a good fit for us please let me know: robert [dot] wiblin [at] centreforeffectivealtruism [dot] org. They can also book a short meeting with me directly.

We may be able to sponsor outstanding applicants from the USA.

Applications close Friday 5th December 2014.

Why is CEA an excellent place to work? 

First and foremost, “making the world a better place” is our bottom line and central aim. We work on the projects we do because we think they’re the best way for us to make a contribution. But there’s more.

What are we looking for?

The specifics of what we are looking for depend on the role and details can be found in the job descriptions. In general, we're looking for people who have many of the following traits:

  • Self-motivated, hard-working, and independent;
  • Able to deal with pressure and unfamiliar problems;
  • Have a strong desire for personal development;
  • Able to quickly master complex, abstract ideas, and solve problems;
  • Able to communicate clearly and persuasively in writing and in person;
  • Comfortable working in a team and quick to get on with new people;
  • Able to lead a team and manage a complex project;
  • Keen to work with a young team in a startup environment;
  • Deeply interested in making the world a better place in an effective way, using evidence and research;
  • A good understanding of the aims of the Centre for Effective Altruism and its constituent organisations.

I hope to work at CEA in the future. What should I do now?

Of course this will depend on the role, but generally good ideas include:

  • Study hard, including gaining useful knowledge and skills outside of the classroom.
  • Degrees we have found provide useful training include: philosophy, statistics, economics, mathematics and physics. However, we are hoping to hire people from a more diverse range of academic and practical backgrounds in the future. In particular, we hope to find new members of the team who have worked in operations or creative industries.
  • Write regularly and consider starting a blog.
  • Manage student and workplace clubs or societies.
  • Work on exciting projects in your spare time.
  • Found a start-up business or non-profit, or join someone else early in the life of a new project.
  • Gain impressive professional experience in established organisations, such as those working in consulting, government, politics, advocacy, law, think-tanks, movement building, journalism, etc.
  • Get experience promoting effective altruist ideas online, or to people you already know.
  • Use 80,000 Hours' research to do a detailed analysis of your own future career plans.
In response to 2013 Survey Results
Comment author: RobertWiblin 24 March 2014 06:19:53PM 0 points [-]

"Finally, at the end of the survey I had a question offering respondents a chance to cooperate (raising the value of a potential monetary prize to be given out by raffle to a random respondent) or defect (decreasing the value of the prize, but increasing their own chance of winning the raffle). 73% of effective altruists cooperated compared to 70% of others - an insignificant difference."

Assuming an EA thinks they will use the money better than the typical other winner, the most altruistic thing to do could be to increase their chances of winning, even at the cost of a lower prize. Or maybe they like the person putting up the prize, in which case they would prefer it to be smaller.

Comment author: SaidAchmiz 15 June 2013 02:44:27AM 0 points [-]

"Eliezer and his colleagues hope to exercise a lot of control over the future. If he is inadvertently promoting bad values to those around him (e.g. it's OK to harm the weak), he is increasing the chance that any influence they have will be directed towards bad outcomes."

That has very little to do with whether Eliezer should make public declarations of things. Are you of the opinion that Eliezer does not share your view on this matter? (I don't know whether he does, personally.) If so, you should be attempting to convince him, I guess. If you think that he already agrees with you, your work is done. Public declarations would only be signaling, having little to do with maximizing good outcomes.

As for the other thing — I should think the fact that we're having some disagreement in the comments on this very post, about whether animal suffering is important, would be evidence that it's not quite as uncontroversial as you imply. I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one. Perhaps you should write one? I'd be interested in reading it.

Comment author: RobertWiblin 15 June 2013 11:07:27AM *  2 points [-]

"Public declarations would only be signaling, having little to do with maximizing good outcomes."

On the contrary, trying to influence other people in the AI community to share Eliezer's (apparent) concern for the suffering of animals is very important, for the reason given by David.

"I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one."

a) Less Wrong doesn't contain the best content on this topic.

b) Most of the posts disputing whether animal suffering matters are written by un-empathetic non-realists, so we would have to discuss meta-ethics and how to deal with meta-ethical uncertainty to convince them.

c) The reason has been given by Pablo Stafforini: when I directly experience the badness of suffering, I don't only perceive that suffering is bad for me (or bad for someone with blonde hair, etc.), but that suffering would be bad regardless of who experienced it (so long as they did actually have the subjective experience of suffering).

d) Even if there is some uncertainty about whether animal suffering is important, it should still be taken quite seriously; even if there were only a 50% chance that other humans mattered, it would be bad to lock them up in horrible conditions, or to signal through my actions to potentially influential people that doing so is OK.

Comment author: SaidAchmiz 15 June 2013 12:16:04AM -1 points [-]

I have to disagree on two points:

  1. I don't think that we should take this thesis ("suffering (and pleasure) are important where-ever they occur, whether in humans or mice") to be well-established and uncontroversial, even among the transhumanist/singularitarian/lesswrongian crowd.

  2. More importantly, I don't think Eliezer or people like him have any obligation to "lead the way", set examples, or be a role model, except insofar as it's necessary for him to display certain positive character traits in order for people to e.g. donate to MIRI, work for MIRI, etc. (For the record, I think Eliezer already does this; he seems, as near as I can tell, to be a pretty decent and honest guy.) It's really not necessary for him to make any public declarations or demonstrations; let's not encourage signaling for signaling's sake.

Comment author: RobertWiblin 15 June 2013 01:57:58AM 8 points [-]

Needless to say, I think 1 is settled. As for the second point - Eliezer and his colleagues hope to exercise a lot of control over the future. If he is inadvertently promoting bad values to those around him (e.g. it's OK to harm the weak), he is increasing the chance that any influence they have will be directed towards bad outcomes.

Comment author: davidpearce 13 June 2013 12:16:32PM 23 points [-]

Eliezer, is that the right way to do the maths? If a high-status opinion-former publicly signals that he's quitting meat because it's ethically indefensible, then others are more likely to follow suit - and the chain-reaction continues. For sure, studies purportedly showing longer lifespans, higher IQs etc of vegetarians aren't very impressive because there are too many possible confounding variables. But what such studies surely do illustrate is that any health-benefits of meat-eating vs vegetarianism, if they exist, must be exceedingly subtle. Either way, practising friendliness towards cognitively humble lifeforms might not strike AI researchers as an urgent challenge now. But isn't the task of ensuring that precisely such an outcome ensues from a hypothetical Intelligence Explosion right at the heart of MIRI's mission - as I understand it at any rate?

Comment author: RobertWiblin 14 June 2013 11:29:06PM 6 points [-]

I think David is right. It is important that people who may have a big influence on the values of the future lead the way by publicly declaring and demonstrating that suffering (and pleasure) are important where-ever they occur, whether in humans or mice.

Comment author: peter_hurford 13 June 2013 06:33:10AM *  3 points [-]

This is something I've considered a lot, though chickens also dominate the calculations along with fish. I'm not currently sure if I value welfare in proportion to neuron count, though I might. I'd have to sort that out first.

A question at this point I might ask is how good does the final estimate have to be? If AMF can add about 30 years of healthy human life for $2000 by averting malaria and a human is worth 40x that of a chicken, then we'd need to pay less than $1.67 to avert a year of suffering for a chicken (assuming averting a year of suffering is the same as adding a year of healthy life, which is a messy assumption).
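The back-of-the-envelope threshold above can be sketched as follows. The figures ($2000 per ~30 healthy human life-years via AMF, and a 40x human-to-chicken weighting) are taken from the comment; the 40x ratio is a hypothetical input, not an established number:

```python
# Back-of-the-envelope cost-effectiveness threshold from the comment above.
# All figures are illustrative assumptions, not settled estimates.

amf_cost = 2000.0          # dollars donated to AMF
human_years = 30.0         # healthy human life-years added per donation
human_to_chicken = 40.0    # assumed moral-weight ratio (hypothetical)

cost_per_human_year = amf_cost / human_years                    # ~$66.67
threshold_per_chicken_year = cost_per_human_year / human_to_chicken

# A chicken intervention beats AMF only if it averts a year of
# suffering for less than this amount.
print(round(threshold_per_chicken_year, 2))  # 1.67
```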

Comment author: RobertWiblin 14 June 2013 11:24:06PM 6 points [-]

I think some weighting for the sophistication of a brain is appropriate, but I think the weighting should be sub-linear w.r.t. the number of neurones; I expect that in simpler organisms, a larger share of the brain will be dedicated to processing sensory data and generating experiences. I would love someone to look into this to check if I'm right.
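One way to make "sub-linear w.r.t. the number of neurones" concrete is a power-law weight with an exponent below 1. The exponent of 0.5 and the approximate neuron counts below are purely illustrative assumptions, not claims from the comment:

```python
# Illustrative sub-linear moral-weight function: weight = neurons ** alpha,
# with alpha < 1 so that doubling neuron count less than doubles the weight.
# alpha = 0.5 and the neuron counts are assumptions for the sketch.

def moral_weight(neurons: float, alpha: float = 0.5) -> float:
    """Power-law weighting of brain size with exponent < 1 (sub-linear)."""
    return neurons ** alpha

human = 86e9      # approximate human neuron count
chicken = 2.2e8   # approximate chicken neuron count

linear_ratio = human / chicken                                  # ~391x
sublinear_ratio = moral_weight(human) / moral_weight(chicken)   # ~19.8x

# Sub-linear weighting compresses the gap between species considerably.
print(round(linear_ratio), round(sublinear_ratio, 1))
```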

Comment author: RobertWiblin 09 April 2012 04:59:52AM 0 points [-]

Thanks for doing this. I found it very memorable when it first aired years ago.

Comment author: RobertWiblin 09 April 2012 02:44:20AM 0 points [-]

To Larks and Shminux - I am twisting the idea of arbitrage to be more like 'economic profit' or 'being sure to beat the market rate of return on investment/altruism'. Maybe I should stop using the term arbitrage.

"Isn't your point basically just that consumer surplus can be unusually high for individuals with unusual demand functions because the supply (of chances to do good) is fixed so lower demand => lower price?"

Yes, though the supply curve just slopes upwards - it isn't vertical.

I could re-write the principle as 'when supply curves slope upwards the purchases that offer the highest consumer surplus to you will mostly be things that you value but others don't.' On financial markets that isn't so important as most investors have very similar values, but in other areas it matters more.

I like your point about feedback loops in finance, but shouldn't proven effective philanthropists attract more donations if people cared about efficacy?

The principle of ‘altruistic arbitrage’

18 RobertWiblin 09 April 2012 01:29AM

Cross-posted from http://www.robertwiblin.com

There is a principle in finance that obvious and guaranteed ways to make a lot of money, so-called ‘arbitrages’, should not exist. It has a simple rationale. If market prices made it possible to trade assets around and in the process make a guaranteed profit, people would do it, in so doing shifting some prices up and others down. They would only stop making these trades once the prices had adjusted and the opportunity to make money had disappeared. While opportunities to make ‘free money’ appear all the time, they are quickly noticed, and the behaviour of traders eliminates them. The logic of selfishness and competition means the only remaining ways to make big money should involve risk-taking, luck and hard work. This is the ‘no arbitrage’ principle.

Should a similar principle exist for selfless as well as selfish finance? When a guaranteed opportunity to do a lot of good for the world appears, philanthropists should notice and pounce on it, and only stop shifting resources into that activity once the opportunity has been exhausted. This wouldn’t work as quickly as the elimination of arbitrage on financial markets of course. Rather it would look more like entrepreneurs searching for and exploiting opportunities to open new and profitable businesses. Still, in general competition to do good should make it challenging for an altruistic start-up or budding young philanthropist to beat existing charities at their own game.

There is a very important difference though. Most investors are looking to make money, and so for them a dollar is a dollar, whatever business activity it comes from. Competition between investors makes opportunities to get those dollars hard to find. The same is not true of altruists, who have very diverse preferences about who is most deserving of help and how we should help them; a ‘util’ from one charitable activity is not the same as a ‘util’ from another. This suggests that unlike in finance, we may be able to find ‘altruistic arbitrages’, that is to say, ‘opportunities to do a lot of good for the world that others have left unexploited.’

The rule is simple: target groups you care about that other people mostly don’t, and take advantage of strategies other people are biased against using. That rule is the root of a lot of advice offered to thoughtful givers and consequentialist-oriented folks. An obvious example is that you shouldn’t look to help poor people in rich countries. There are already a lot of government and private dollars chasing opportunities to assist them, so the low-hanging fruit has all been used up and then some. The better-value opportunities are going to be in poor, unromantic places you have never heard of, where fewer competing philanthropic dollars are directed. Similarly, you should think about taking high-risk, high-return strategies. Most do-gooders are searching for guaranteed and respectable opportunities to do a bit of good, rather than peculiar long-shot opportunities to do a lot of good. If you only care about the ‘expected’ return of your charity, then you can do more by taking advantage of the quirky, improbable bets neglected by others.
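The expected-value argument in the paragraph above can be sketched with a toy comparison of a guaranteed intervention against a neglected long shot. All the numbers here are made up for illustration:

```python
# Toy expected-value comparison: a guaranteed, respectable intervention
# vs. a neglected long shot. Probabilities and payoffs are invented.

def expected_good(p: float, payoff: float) -> float:
    """Expected good done: probability of success times good if it works."""
    return p * payoff

safe = (1.0, 100.0)           # certain, modest good
long_shot = (0.01, 50_000.0)  # 1% chance of a very large payoff

print(expected_good(*safe))       # 100.0
print(expected_good(*long_shot))  # 500.0 -- the long shot wins in expectation
```

The point is only that under risk-neutrality the comparison reduces to multiplication; a risk-averse donor would weight the two options differently.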

Who do I personally care about more than others? For me the main candidates are animals, especially wild ones, and people who don’t yet exist and may never exist – interest groups that go largely ignored by the majority of humanity. What are the risky strategies I can employ to help these groups? Working on future technologies most people think are farcical naturally jumps to mind but I’m sure there are others and would love to hear them.

This principle is the main reason I am skeptical of mainstream political activism as a way to improve the world. If you are part of a significant worldwide movement, it’s unlikely that you’re working in a neglected area and exploiting how your altruistic preferences are distinct from those of others.

What other conclusions can we draw thinking about philanthropy in this way?
