All of Benjamin Hilton's Comments + Replies

[x-posted from EA forum]

Hi Remmelt,

Thanks for sharing your concerns, both with us privately and here on the forum. These are tricky issues and we expect people to disagree about how to weigh all the considerations — so it’s really good to have open conversations about them.

Ultimately, we disagree with you that it's net harmful to do technical safety research at AGI labs. In fact, we think it can be the best career step for some of our readers to work in labs, even in non-safety roles. That’s the core reason why we list these roles on our job b... (read more)

Remmelt
[cross-posted replies from EA Forum]

Ben, it is very questionable that 80k is promoting non-safety roles at AGI labs as 'career steps'. Consider that your model of this situation may be wrong (account for model error).

* The upside is that you enabled some people to skill up and gain connections.
* The downside is that you are literally helping AGI labs to scale commercially (as well as indirectly supporting capability research).

I did read that compilation of advice, and responded to that in an email (16 May 2023):

> "Dear [a],
>
> People will drop in and look at job profiles without reading your other materials on the website. I'd suggest just writing a do-your-research cautionary line about OpenAI and Anthropic in the job descriptions itself.
>
> Also suggest reviewing whether to trust advice on whether to take jobs that contribute to capability research.
> * Particularly advice by nerdy researchers paid/funded by corporate tech.
> * Particularly by computer-minded researchers who might not be aware of the limitations of developing complicated control mechanisms to contain complex machine-environment feedback loops.
>
> Totally up to you of course.
>
> Warm regards,
> Remmelt"

This is what the article says:

"All that said, we think it’s crucial to take an enormous amount of care before working at an organisation that might be a huge force for harm. Overall, it’s complicated to assess whether it’s good to work at a leading AI lab — and it’ll vary from person to person, and role to role."

So you are saying that people are making a decision about working for an AGI lab that might be (or actually is) a huge force for harm. And that whether it's good (or bad) to work at an AGI lab depends on the person — i.e. people need to figure this out for themselves personally.

Yet you are openly advertising various jobs at AGI labs on the job board. People are clicking through and applying. Do you know how many read your article beforehand?

~ ~ ~

Even if they did read thro
yanni kyriacos
Hi Benjamin - would be interested in your take on a couple of things:

1. By recommending people work at big labs, do you think this has a positive Halo Effect for the labs' brand? I.e. 80k is known for wanting people to do good in the world, so by recommending people invest their careers at a lab, those positive brand associations get passed onto the lab (this is how most brand partnerships work).
2. If you think the answer to #1 is Yes, do you believe the cost of this Halo Effect is outweighed by the benefit of having safety-minded EA / Rationalist folk inside big labs?