Comment author: [deleted] 17 November 2013 03:52:25AM 0 points

I have first-degree friends who have worked with 80K, and they've said it's unlikely that they would prioritize interviewing me, since I'm not directly optimizing for earning-to-give (something I made clear). I think it's still worth a shot to try to get into their candidate pool, and I could see if I could get an off-the-record conversation with some of the staff. So we'll see.

Comment author: Benjamin_Todd 27 November 2013 02:33:11PM 2 points

Hi, I'd like to clarify that we prioritise people who are optimising around positive impact, not earning to give. If someone takes earning to give seriously, then we view that as a good indicator, but we speak to lots of people who aren't considering earning to give careers.

I started writing a response, but decided it would be better to summarise my general thoughts on degree choice and post them on our blog. So see our latest thoughts on how to pick a degree.

As far as this particular situation goes, I haven't thought about it much, so take this with a pinch of salt. My gut reaction is that CompSci is slightly more impressive than bioengineering, and if it helps you learn to program better, then the skills will be more generally useful. You also say that bioengineering is a major time sink, which I'd count as a point against it. So, my highly uncertain impression is that I'd prefer CompSci. On the other hand, if you'll find it easier and more motivating to study bioengineering and you'll get better grades, then I'd rate that pretty highly (especially if you're aiming to continue into research).

Comment author: Benjamin_Todd 16 September 2013 06:29:32PM 2 points

FYI: There has been a discussion on 80,000 Hours (started by me) about the value of this project and how to maximise it.

Comment author: John_Maxwell_IV 09 August 2013 11:32:58PM 0 points

By "what kind of applicant field are you looking at?" I mean: do you already have a good number of relatively strong applicants?

Comment author: Benjamin_Todd 11 August 2013 09:44:40PM 0 points

Hey John, we discuss this quite a bit in the interview (esp. the 1st and 2nd questions). Happy to take further, more specific questions here though.

Comment author: iamesco 11 August 2013 03:05:00PM 0 points

The Zizek response is absurd. He criticises Zizek for not giving any alternatives to (cultural) capitalism, and yet he has clearly never been in the same room as a Zizek book. How can you expect to know everything there is to know about a writer's thought from watching a ten-minute video? Reading a chapter of HPMoR doesn't entitle me to sweeping opinions on Eliezer's philosophy.

Edit: Please explain how the Zizek video is "bad/incorrect/flawed/misleading/incomplete".

Comment author: Benjamin_Todd 11 August 2013 08:34:07PM 1 point

People might prefer this pair:

Peter Buffett and Zizek on why philanthropists do more harm than good, and Will MacAskill's response on Qz.com

Comment author: lukeprog 25 April 2013 07:08:04AM 1 point

FYI, I told the CFAR principals about How to Measure Anything, and specifically about the calibration exercises detailed in chapter 5, on September 9th of last year, at which time Anna said she had previously read the first half of the book.

But yeah, it hasn't been discussed on LW much, though it has been on my recommended books page for a long time.

Comment author: Benjamin_Todd 01 May 2013 05:38:30AM 0 points

Sorry Luke, I didn't want to bother you so didn't ask, but I should have guessed you would have found this :)

[LINK] How to calibrate your confidence intervals

Post author: Benjamin_Todd 25 April 2013 06:26AM 11 points

In the book "How to Measure Anything", Douglas Hubbard presents a step-by-step method for calibrating your confidence intervals, which he has tested on hundreds of people, showing that it can make 90% of people almost perfect estimators within half a day of training.

I've been told that the Less Wrong and CFAR community is mostly not aware of this work, so given the importance of making good estimates to rationality, I thought it would be of interest.

(Although note that CFAR has developed its own games for training confidence interval calibration.)

The main techniques to employ are:


Equivalent bet:

For each estimate, imagine that you are betting $1,000 on the answer being within your 90% CI. Now compare this to betting $1,000 on a spinner where 90% of the time you win and 10% of the time you lose. Would you prefer to take a spin? If so, your range is too narrow and you need to widen it. If you would rather bet on the question, your range is too wide and you need to reduce it. If you are indifferent between answering the question and taking a spin, then the range really is your 90% CI.
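
To see why indifference between the two bets is the test of calibration, here is a minimal simulation sketch (my own Python, not from Hubbard's book; the function name and the hit_rate parameter are illustrative assumptions):

```python
import random

def equivalent_bet(hit_rate, n_trials=100_000, stake=1000):
    """Compare average winnings of two $1,000 bets:
    (a) that the true value falls inside your stated 90% CI, and
    (b) a spinner that pays out 90% of the time.

    hit_rate is the actual (unknown to you) probability that your
    stated 90% CI contains the answer.
    """
    ci_avg = sum(stake if random.random() < hit_rate else -stake
                 for _ in range(n_trials)) / n_trials
    spinner_avg = sum(stake if random.random() < 0.9 else -stake
                      for _ in range(n_trials)) / n_trials
    return ci_avg, spinner_avg

print(equivalent_bet(0.7))  # roughly (400, 800): spinner wins, ranges too narrow
print(equivalent_bet(0.9))  # roughly (800, 800): indifference = calibration
```

If you find yourself preferring the spinner, your intervals are capturing the truth less than 90% of the time, so widening them is the right response.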

Absurdity Test:

Start with an absurdly large range, perhaps from minus infinity to plus infinity, and then narrow it by ruling out values you know to be highly unlikely or outright impossible.

Avoid Anchoring:

Anchoring occurs when you think of a single answer to the question and then add an error margin around it; this often produces ranges that are too narrow. The absurdity test is one good way to counter anchoring. Another is to change how you look at your 90% CI: there is a 10% chance that the answer lies outside your range, which splits into a 5% chance that the answer is above your upper bound and a 5% chance that it is below your lower bound. Treat each bound separately by asking, 'Am I 95% confident that the answer is above my lower bound?' If not, raise or lower the bound as required, then repeat the process for the upper bound.
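
As a worked illustration of treating the bounds separately, here is a short sketch that assumes, purely for illustration, that your uncertainty can be summarised as a normal distribution; the 90% CI then runs from the 5th to the 95th percentile:

```python
from scipy.stats import norm

# Illustrative belief distribution: mean 50, standard deviation 20.
belief = norm(loc=50, scale=20)

# Lower bound: the point you are 95% sure the answer lies above.
lower = belief.ppf(0.05)
# Upper bound: the point you are 95% sure the answer lies below.
upper = belief.ppf(0.95)

print(f"90% CI: [{lower:.1f}, {upper:.1f}]")  # about [17.1, 82.9]
```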

Pros and cons:

Identify two pros and two cons of the range you have given, to help clarify your reasons for making this estimate.

Once you have used these techniques you can make another equivalent bet to check whether your new estimate is your 90% CI.


To train yourself, practice making estimates repeatedly while using these techniques, until you are well calibrated: that is, until roughly 90% of the true values fall within your stated 90% confidence intervals.
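
One way to check your progress is to score batches of practice questions by the fraction of true values that land inside your stated intervals. A minimal sketch (my own; the practice data below is made up):

```python
def calibration_score(estimates):
    """Fraction of true values that fell inside the stated 90% CIs.

    estimates: list of (lower, upper, true_value) tuples from
    practice questions with known answers.
    """
    hits = sum(low <= truth <= high for low, high, truth in estimates)
    return hits / len(estimates)

# Made-up practice data, for illustration only.
practice = [(10, 50, 42), (100, 300, 350), (0, 5, 3), (1000, 2000, 1500),
            (20, 80, 75), (5, 15, 9), (200, 800, 640), (1, 3, 2),
            (50, 90, 88), (300, 700, 710)]

print(calibration_score(practice))  # 0.8: below the 0.9 target, widen your ranges
```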

To read more and try some sample questions, see the article we prepared on 80,000 Hours here.


Comment author: lukeprog 21 November 2012 04:52:07AM 32 points

Thanks for this.

Another question...

Did those involved with CEA study the literature on human value drift — if so, what did they find? What is CEA's own experience with it?

Examples I've witnessed several times each: Someone plans to do environmental law only but they end up in corporate law. Another person plans to become a professional philanthropist, but then fails to donate later, and instead spends money keeping up with the Joneses. Someone else plans to be a genuine, pleasant person but then they study "pickup artistry" and find that being a manipulative, cocky jerk actually does increase their success with women, and a bit later I discover they're a cocky, manipulative jerk to everyone. (Note to everyone: there are lots of ways to increase one's romantic success without becoming a cocky, manipulative jerk!)

I wish I knew how often this kind of value drift happens. Value drift with regard to professional philanthropy seems to happen a lot in the SI community; maybe it happens less often in communities focused on more "ground-level" causes like poverty reduction? What can be done to prevent it?

Of course, we probably don't want to prevent some kinds of value drift, e.g. value drift that occurs strictly due to encountering new and better information. I used to care a lot about God's will, until I gained information indicating God's non-existence.

Comment author: Benjamin_Todd 23 November 2012 02:45:17PM 10 points

Hi Luke,

This is certainly really important for 80k - it's on our list of strategic considerations to investigate.

We haven't looked into it in depth already, beyond knowledge of some relevant psychology literature (e.g. being primed by images of money has been found to make people more selfish in a couple of (probably dodgy) studies).

We've put a couple of measures in place which seem like they might help to mitigate the types of drift that don't involve updating on new information. First, making a public commitment to make the world a better place in an effective way encourages people not to drift towards being non-altruistic, because people want to be consistent (while the commitment is also sufficiently broad not to tie people to moral beliefs they might well want to change, e.g. that animal suffering doesn't matter). Second, participating in the 80k community could help to counteract destructive social pressure from workplace communities. It remains to be seen how well these measures work - we'll be keeping a close eye.

Ben

Comment author: [deleted] 17 August 2012 04:51:10PM 0 points

I could be mistaken, and I hope you will correct me if I am wrong. That sounds like equating a measurable outcome with success. Like a company that invested five hundred dollars, made a penny, and called itself profitable. A profit was made, but... no. One net distributed, one life saved - I will not say that's no good, at any cost. But some bottom line of failure, of surrender, should be part of the evaluation. Charities that crow the most about 'raising awareness' or prayer are the worst offenders, confusing activity with achievement. They do more than nothing, but... no.

Comment author: Benjamin_Todd 17 August 2012 07:17:46PM 2 points

GiveWell is effectively attempting to work out which charities most increase human welfare per dollar. So a charity 'fails' if it becomes clearly less effective than the next-best option.

Comment author: Giles 16 August 2012 10:47:07PM 0 points

> EA movement building dominating most of the other approaches

Good thing I'm doing that then :-)

On the other hand, my map says that people in the EA movement will say that EA movement building is the bestest thing, people in the SI will say that it's FAI research, etc. etc. Once you've filtered for strategically-minded people, you'd expect them all to already be doing whatever they thought was most effective (though out of the people I have in mind, not everyone is motivated by xrisk reduction, or not exclusively).

Looking forward to what your team has to say on the matter though, definitely.

Comment author: Benjamin_Todd 17 August 2012 02:43:23AM 0 points

Heh, almost - but the argument only seems to apply to xrisk. I don't see much reason to think EA movement building is the most effective way to fight global poverty.

Comment author: jsteinhardt 16 August 2012 09:29:08AM 2 points

> If the same article would be written by someone else, I would recommend them to ask the same question in the Open Thread. Should I vote differently just because of the person who wrote it?

I would suggest that instead of downvoting until no one can see the post, you explain to them how to make their post better. I'm not even asking you to upvote, I'm just asking you not to hide important content, or if you do, at least constructively help your ally make better content. Even if LessWrong is aware of 80,000 Hours, the staff at 80,000 Hours might not be super-familiar with LessWrong, and so might accidentally violate certain local norms. Punishing them for this, rather than helpfully correcting them, is what seems counterproductive to me. Once you explain the norms, then you should feel totally free to criticize things that don't adhere to them.

> So now voting about articles is no longer about their quality, but becomes a political question?

Basically everything you do has political repercussions. Insisting otherwise will probably lead to poor results.

Comment author: Benjamin_Todd 16 August 2012 08:52:50PM 0 points

That sounds about right!
