vaishnav92

I don't think it's great to tell most people to keep switching fields based on updated impact calculations. There are advantages to building focused careers: returns to effort increase within the same domain. The exceptions would be founder types and some generalist talent. I'm also not sure why we start with the premise that EA has to channel people into specific career paths based on impact calculations; doing so has a distortionary effect on the price of labor. Just as I'd prefer tax dollars to be channeled into direct cash payments as welfare, I'd prefer that EAs made as much money as possible and donated it, so they can pay whoever is best qualified to do what needs to be done.

I just did. 

I'm not sure I have one that folks within EA would find palatable. The solution, in my mind, is for Effective Altruism to become a movement that mostly focuses on raising and allocating capital, and that uses markets to get things done downstream of that. I think EA should get out of the business of providing subsidized labor to the "most important causes". Instead, allocate capital and use incentives and markets to get what you want. This would mean all movement-building efforts focus on earning to give. If you want someone smart to found a charity, pay to incentivize that.

One response I anticipate from EAs is that ambitious projects often require teams that are high trust (or, in EA parlance, value aligned), since impact often can't be tracked purely through metrics and incentives. I'm not sure I buy this. It's true that corporations, at the highest level, have something far more legible for the leadership team to optimize. But at each lower level of the hierarchy, corporations face the same problems of Goodharting and incentive alignment. They don't always make the best decisions, but good companies manage to do this well enough at most levels to get important things done. What makes me even more suspicious is that people don't even want to try this.

(1) Some channels, like email, provide strategic ambiguity about whether signalling is conscious or not.

(2) It's possible to build habits (e.g. asking thoughtful, open-ended questions, doing more research than the median person would, etc.) that could eventually become subconscious.

I don't necessarily think "being transactional" is the problem. What I've observed more frequently is a complete lack of awareness of the other party's interests and incentives (i.e., a failure of theory of mind).

I also don't know that conscious signalling is necessarily the problem; the problem is signalling without attempting to make the interaction mutually beneficial.

Paying attention to social capital seems like one risk management mechanism. I try to ask: what sort of people is this likely to put me in touch with, and in what way? Will it increase the surface area of people to whom I can showcase my strengths and with whom I can build relationships? I wrote something along these lines here (in the context of evaluating startups as an employee): https://vaishnavsunil.substack.com/p/from-runway-to-career-capital-a-framework. Would be keen to hear what you think if you end up reading it.

Thank you! Do you mean risk reduction strategy as in: how do you, as an employer, mitigate the downside risk of hiring people with less legible credentials?

How much would we have to pay you to move to Congo?

I posted this on the EA forum a couple of weeks ago - https://forum.effectivealtruism.org/posts/7WKiW4fTvJMzJwPsk/adverse-selection-in-minimizing-cost-per-life-saved

No surprise that people on the forum seem to think #4 is the right answer (although they did acknowledge this is a valid consideration). But a lot of the response was "this is so cheap that it's probably still the right answer" and "we should be humble and not violate people's intuition that all lives are equal".

Yes, unless what donors really want is to think no further than the cost of a DALY. Sure, GiveWell donors care about "actually having an impact" in that they're doing more than most donors to figure out whom to best delegate resource allocation to, but how many would actually change their allocation based on this information? I don't really know, but I'm not confident it's a high proportion.

Agreed; this would be more pertinent to answering this question than what GiveWell has commissioned thus far. I'm meeting someone this weekend who is working on DALYs at the Effective Institutions Project. I will update here if I hear something interesting.

  1. Thanks for the feedback. Thinking about it for a minute, it seems like your first point is more than just stylistic criticism. By "better" I meant that we have strong intuitions about first-person subjective experience, but I now realize the way I phrased it might be begging the question.
  2. Why do you think I'm making that assumption? I assume EAs care about all of these things, with some reasonable exchange rate between the three. Assuming you only care about DALYs, doesn't this bias you towards enhancing subjective experience, pain relief, etc. (e.g. GiveDirectly, StrongMinds) versus life-saving interventions that might be barely net positive anyway, especially because things like malarial bed nets don't have other positive externalities (unlike something like deworming)? I agree it's also an update towards other things EAs could plausibly do, such as institutional improvements/human capital development.