All of vaishnav92's Comments + Replies

I don't think it's great to tell most people to keep switching fields based on updated impact calculations. There are advantages to building focused careers - increasing returns to effort within the same domain. The exceptions would be founder types and some generalist talent. I'm not sure why we start with the premise that EA has to channel people into specific career paths based on impact calculations; it has a distortionary effect on the price of labor. Just as I'd prefer tax dollars being channeled into direct cash payments as welfare, I'd prefer that EAs made as much money as possible and donated it, so they can pay whoever is best qualified to do what needs to be done.

I just did. 

I'm not sure I have one that folks within EA would find palatable. The solution, in my mind, is for Effective Altruism to become a movement that mostly focuses on raising and allocating capital - one that uses markets to get things done downstream of that. I think EA should get out of the business of providing subsidized labor to the "most important causes". Instead, allocate capital and use incentives and markets to get what you want. This would mean all movement-building efforts focus on earning to give. If you want someone smart to fou... (read more)

1Jonas Hallgren
I guess the solution you're more generally pointing at here is something like ensuring a split between the incentives of the people within specific fields and the incentives of EA itself as a movement - almost as if that part of EA were only global priorities research and something like market allocation? I have a feeling there might be other ways to go about this, like programs or incentives for making people more open to taking any type of impactful job - something like recurring reflection periods or other kinds of workshops/programs?

(1) Some channels, like email, provide strategic ambiguity on whether signalling is conscious or not. 

(2) It's possible to build habits (e.g. asking thoughtful, open-ended questions, doing more research than the median person would, etc.) that could eventually become subconscious.

I don't necessarily think "being transactional" is the problem. What I've observed more frequently is a complete lack of awareness of the other party's interests and incentives (theory of mind).

I also don't know that conscious signalling is necessarily the problem; the problem is signalling without attempting to make the interaction mutually beneficial.

Paying attention to social capital seems like one risk-management mechanism. I try to ask: what sort of people is this likely to put me in touch with, and in what way? Will this increase the surface area of people to whom I can showcase my strengths and build relationships? I wrote something along these lines here (in the context of evaluating startups as an employee) - https://vaishnavsunil.substack.com/p/from-runway-to-career-capital-a-framework. Would be keen to hear what you think if you end up reading it.

1Jonas Hallgren
Yeah, that was what I was looking for, very nice. It does seem to confirm my thinking that you can't really run the same bet strategy as VCs. I also really appreciate the thoughts in there; they seem like things one should follow. I've got to make sure to do the last due-diligence part of talking to people who have worked with the founders in the past - it has always felt like a lot, but you're right that one should do it. Also, I'm wondering why there isn't some sort of bet-pooling network for startup founders, where you have, say, 20 people band together and agree that they will all try ambitious projects and support each other if they fail. It's like startup insurance, but from the perspective of the people doing startups. Of course you have to trust the others involved, but I think this should work?

Thank you! Do you mean risk-reduction strategy as in: how do you, as an employer, mitigate the downside risk of hiring people with less legible credentials?

3Jonas Hallgren
No sorry, I meant from the perspective of the person with less legible skills.

How much would we have to pay you to move to Congo?

3Stephen Fowler
Assuming I blend in and speak the local language, within an order of magnitude of 5 million (edit: USD).

I don't feel your response meaningfully engaged with either of my objections.

I posted this on the EA forum a couple of weeks ago - https://forum.effectivealtruism.org/posts/7WKiW4fTvJMzJwPsk/adverse-selection-in-minimizing-cost-per-life-saved

No surprise that people on the forum seem to think #4 is the right answer (although they did acknowledge this is a valid consideration). But a lot of it was "this is so cheap that this is probably still the right answer" and "we should be humble and not violate the intuition people have that all lives are equal". 

Yes, unless what donors really want is to think no further than the cost of a DALY. Sure, GiveWell donors care about "actually having an impact" in that they're doing more than most donors to understand whom to best delegate resource allocation to, but how many would actually change their allocation based on this information? I don't really know, but I'm not confident it's a high proportion.

Agreed - this would be more pertinent to answering this question than what GiveWell has commissioned thus far. I'm meeting someone this weekend who is working on DALYs at the Effective Institutions Project. Will update here if I hear anything interesting.

  1. Thanks for the feedback. Thinking about it for a minute, it seems like your first point is more than just stylistic criticism. By "better" I meant that we have strong intuitions about first-person subjective experience, but I now realize the way I phrased it might be begging the question.
  2. Why do you think I'm making that assumption? I assume EAs care about all of these things with some reasonable exchange rate between the three. Assuming you only care about  doesn't this bias you towards enhancing subjective experience, pain relief etc (eg. G
... (read more)