I strongly disagree with this take. (The link goes to a post of mine on the Effective Altruism Forum.) The main point there is that if you were paid for services, you weren't being "helped"; that's not much different from being a plumber who worked on the FTX building.
I should note that I'm less advocating that small EA grantees whose projects would likely fail give back the money, and more advocating a sort of collective responsibility around it. I don't think I've received money from FTX, but as I said in the twitter thread, I would probably donate to an EA fund for the victims of FTX. I should probably have made the suggested policy clearer, rather than hiding it behind a link to twitter.
So, a shady character approaches you and offers a deal that would make you $1K richer. You know that the worst-case scenario, if you accept and they get caught, is that you just have to return the $1K, so you have no incentive to refuse to deal with them. No, you need something like 3x damages to properly disincentivise people from turning a blind eye.
The case I had in mind was philanthropy, where the $1000 would go to some charitable project. This creates a more difficult situation if you have to hand it back, because you've presumably already spent it, and not on something that makes you personally stronger. So you are suddenly faced with the potentially difficult task of scraping together the money to pay it back.
I do agree that optimal incentives would weight things inversely to the probability of getting caught. I'm not sure how necessary perfectly optimal incentives are, though, since telling your community's other donors that one of the donors turned out to be a bad actor, and that we now need a bunch of their money to pay back that actor's victims, already seems like it would suck.
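To spell out that inverse-probability point, here is a rough expected-value sketch with notation I'm introducing myself, not something anyone in the thread committed to: say the help is worth B to you, you repay k·B if the wrongdoing comes to light, and it comes to light with probability p. Then turning a blind eye only stops being worthwhile when

\[
B - p \cdot k \cdot B \;\le\; 0
\quad\Longleftrightarrow\quad
k \;\ge\; \frac{1}{p}.
\]

On this toy model a 3x multiplier is only enough if the chance of getting caught is at least about 1/3; rarer detection needs a proportionally larger multiplier, which is what weighting "inversely to the probability of getting caught" amounts to.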
A shady character donated $1K to starving children in Africa, using me as an intermediary. Now I need to make sure those children will return 3× the amount.
Suppose you are cooperating with someone. It seems like there is good reason to keep an eye on your partner, to make sure that they do not do very bad things.
But how vigilant should you be about keeping an eye on your partners? And who should you keep an eye on?
Here's one proposal: to the extent that your partner helps you, and it later turns out that their help was funded by bad things, you should try to help your partner's victims as much as your partner helped you. For instance, if your partner stole $10000 from a bank and then gave you $1000, you should return this $1000 to the bank.
This seems to me to give you nice, clear incentives. For instance, it neatly defines which people you must keep an eye on, and gives you incentives to watch them in proportion to how entangled they are with your organization. It also directly links diligence to cooperation, so you have a logical reason to give your partners for why you want extra checks when they offer to help you. And it also seems like it would help your reputation in case something genuinely does go wrong.
(I'm not sure whether it should be exactly 1:1. An argument for giving back more than 1:1 is that helping the victims later is likely worth less than not harming them in the first place, and also that you won't always catch bad actors, so paying back more would counterbalance this incentives-wise. An argument for giving back less than 1:1 is that otherwise you are much more likely to be unable to cover the cost. Probably any significant self-inflicted cost from bad partners would massively improve the incentives.)
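(To put that tension in rough terms, a toy sketch with made-up notation rather than a worked-out claim: if the wrongdoing only comes to light with probability p and you then give back k times the B you received, deterrence roughly requires

\[
k \;\ge\; \frac{1}{p},
\quad\text{while the bill when it actually lands is}\quad
k \cdot B \;=\; \frac{B}{p},
\]

which grows precisely as detection gets less likely; that is, the cases where a bigger multiplier is needed for deterrence are also the cases where the one-off cost is hardest to cover.)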
Inspired by discussion on twitter.