Brian Bien


Updated to include: "Taxation immediately reduces the frequency of payouts" section toward the end.

Thank you for the feedback; I've updated the post in another attempt at improved clarity with your concerns in mind. I think the optimal tax amount could be determined empirically. The word "sin" carries baggage that can simply be avoided; there is no retributive intent, nor even a need to match the extent of the negative externality. Rather, there is some theoretical tax rate and increase timeline that would reduce net harm. An empirical approach could explore various rate increases and their effects, using various proxies for both payment and compliance to estimate how the market is affected.

Do you think sufficiently stiff penalties for non-reporting (in proportion to the payment amount, perhaps) might address this?

Is the argument roughly, "some will evade taxes, so the policy will not work as well, and is therefore not worth implementing"?

Right, the proposal offers no benefit to the next victims immediately after its implementation. I also agree that the inelasticity of the ransomware market would increase the initial burden on the next victims, since total payments (ransom + tax) would be higher prior to adaptation. Indeed, whenever the tax rate is increased, the victims immediately following would pay more, so perhaps raising the rate slowly would be best. One assumption I made is that attackers are already demanding their utility-maximizing amount. Since ransoms would decrease over time as all actors become aware of the tax, the benefit would be realized through the downstream effects of reduced ransomware funding; the would-be victims of the future are the real intended beneficiaries.

(End of post updated for clarity on this)

> The fact that you choose an algorithm does not effect its performance, and you don't have to worry about Causal Goodhart. 

But now, I think you have to worry about "Regressional Goodhart."

Maybe this is pedantic to point out, but the best-performing model on test data is increasingly likely to have done that well by chance as the number of models evaluated grows (hence separate validation and test splits).
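To make the selection effect concrete, here is a minimal simulation (my own hypothetical sketch, not from the original discussion): every candidate model is given the same true accuracy, yet the best *observed* accuracy on a finite test set climbs as more models are compared, so the winner's score overstates its true skill.

```python
import random

random.seed(0)

TRUE_ACCURACY = 0.7  # assumed: every candidate model has identical true skill
TEST_SIZE = 1000     # assumed: size of the held-out test set

def observed_accuracy():
    """Simulate scoring one model: each prediction is correct with prob. TRUE_ACCURACY."""
    correct = sum(random.random() < TRUE_ACCURACY for _ in range(TEST_SIZE))
    return correct / TEST_SIZE

# Picking the best of n models inflates the reported score even though
# no model is actually better than any other.
for n_models in (1, 10, 100, 1000):
    best = max(observed_accuracy() for _ in range(n_models))
    print(f"best of {n_models:>4} identical models: observed accuracy {best:.3f}")
```

The gap between the winning score and the true 0.7 is pure selection noise, which is exactly why the final model should be scored once on data that played no part in choosing it.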