If I can introduce a problem domain that doesn't get a lot of play in these communities but (I think) should:
End-of-life healthcare in the US seems like a huge problem (in terms of cost, honored preferences, and quality of life for many people) that's relatively tractable for its size. The balance probably falls in favor of making things happen rather than researching technical questions, but I'm hoping it still belongs here.
There's a recent IOM report that covers the presently bleak state of affairs and potential ways forward pretty thoroughly. One major problem is that doctors don't know their patients' care preferences, which biases treatment toward acute care over palliative care, which in turn leads to unpleasant (and expensive) final years. There are a lot of different levers in actual care practices, advance care planning, professional education/development, insurance policies, and public education. I might start with the key findings and recommendations (PDF) and think about where to go from there. There's also Atul Gawande's recent book Being Mortal, which I've yet to read but people seem excited about. Maybe look at what organizations like MyDirectives and Goals of Care are doing.
This domain probably has a relative advantage in belief- or value-alignment for people who think widely available anti-aging is far in the future or undesirable, although I'm tempted to argue that in a world with normalized life extension, the norms surrounding end-of-life care become even more important. The problem might also be unusually salient from some utilitarian perspectives. And while I've never been sure what civilizational inadequacy means, people interested in it might be easier to sell on fixing end-of-life care.
I like this post. I lean towards skepticism about the usefulness of calibration or even accuracy, but I'm glad to find myself mostly in agreement here.
For lots of situations that are practical to me, a little bit of uncertainty goes a long way in how I actually decide what to do. It doesn't really matter how much uncertainty there is, or how well I can estimate it. It's better for me to just be generally humble and make contingency plans. It's also easy to imagine that being well-calibrated (or knowing that you are) could, if you're not careful, demolish biases that are actually protective against bad outcomes. If you are careful, sure, there are possible benefits, but they seem modest.
But making and testing predictions seems more than modestly useful, whether or not you get better (or better calibrated) over time. I find I learn better (testing effect!) and I'm more likely to notice surprising things. And it's an easy way to lampshade certain thoughts/decisions so that I put more effort into them. Basically, this:
To be more concrete, a while back I ran a self-experiment on quantitative calibration for time-tracking/planning (your point #1). The idea was to get a baseline by making and resolving predictions without any feedback for a few weeks: I didn't know how well I was doing, and I made predictions in batches so I usually couldn't remember them and thus couldn't target my prediction "deadlines". Then I'd start looking at calibration curves and so on to see whether feedback might improve predictions (in general or in particular domains). It turned out after the first stage that I was already well-calibrated enough that I wouldn't be able to measure any interesting changes without an impractical number of predictions. But while the experiment lasted, I got a moderate boost in productivity just from knowing I had a clock ticking, plus more effective planning from the way predictions forced me to think about contingencies. (I stopped the experiment because it was tedious, but I upped the frequency of predictions I make habitually.)
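For anyone wanting to try something similar: the calibration-curve step is easy to sketch. This is a minimal illustrative version (not the tooling I actually used), which bins resolved predictions by stated probability and compares each bin's average stated probability to the observed frequency of the predicted event:

```python
from collections import defaultdict

def calibration_curve(predictions, n_bins=10):
    """Given (stated_probability, outcome) pairs with outcome in {0, 1},
    return per-bin (mean stated probability, observed frequency, count).
    For a well-calibrated predictor, the first two should roughly match."""
    bins = defaultdict(list)
    for p, outcome in predictions:
        # Clamp p == 1.0 into the top bin instead of creating bin n_bins.
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append((p, outcome))
    curve = []
    for idx in sorted(bins):
        pairs = bins[idx]
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        curve.append((mean_p, freq, len(pairs)))
    return curve

# Example: a handful of resolved predictions (probability, did it happen?).
preds = [(0.9, 1), (0.9, 1), (0.9, 0), (0.6, 1),
         (0.6, 0), (0.2, 0), (0.2, 0), (0.2, 1)]
for mean_p, freq, n in calibration_curve(preds, n_bins=5):
    print(f"stated ~{mean_p:.2f} -> observed {freq:.2f} (n={n})")
```

As the experiment suggested, the catch is sample size: with only a few predictions per bin, the observed frequencies are too noisy to distinguish decent calibration from good calibration.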