Your argument assumes that the algorithm and the prisons have access to the same data. This need not be the case: in particular, if a prison bribes a judge to over-convict, the algorithm will (incorrectly) treat that conviction as valid data, skewing the predicted recidivism measure.
That said, the perverse incentive you mentioned is absolutely in play as well.
Okay, a summary of my attitude towards EA is that EA rationally follows from a set of weird premises that are not shared by most people and certainly not by me. I do not have any desire to maximize utility in a way that considers utility for every human being equally. I prefer increasing utility for myself, my family, friends, countrymen, and people like me. Every time I pay for electricity for my computer rather than sending the money to a third world peasant is, according to EA, a failure to maximize utility.
Also, I believe that most cases of EA producing very counterintuitive results are just examples of cases where the weirdness of EA becomes obvious.
I'm sad that people still think EAers endorse such a naive, short-horizon way of optimizing utility. Ceasing to pay for your computer's electricity would obviously not optimize any reasonable utility function over any reasonable timeframe.
More generally, I think most EAers have a much more sophisticated understanding of their values, and the psychology of optimizing them, than you give them credit for. As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. Instead, most people allocate a "charity budget" periodically and make sure they feel ok about both the charity budget and the amount they spend on themselves. Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.