As a full-blown Bayesian, I feel that the Bayesian approach is *almost* perfect. It was a revelation when I first realized that, instead of carrying around a big frequentist toolbox of heuristics, one can simply assume that every quantity involved is a random variable. Then everything is solved! But pretty quickly I came to the catch, namely that to be able to do anything, the probability distributions must be parameterized. Then you start wondering what the pdfs of the parameters should be, and off we go into infinite regress.
But the biggest catch is of course that the integral for the posterior is almost never solvable in closed form. If it were, I believe we would have had superhuman AI a long time ago. Still, I think Bayesian methods are underexploited in AI. For example, it is straightforward to build a "curious" system that asks the user about the things it is uncertain of, in a way that minimizes the need for human input. (My lab is currently working on such a system for auditory testing.)
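The "curious" querying idea can be sketched as a tiny active-learning loop: maintain a posterior over hypotheses and pick the question whose answer is expected to shrink that posterior's entropy the most. The toy hypothesis space and candidate questions below are my own illustrative assumptions, not the actual auditory-testing system described above.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Hypothetical setup: a uniform prior over four hypotheses, and two
# candidate yes/no questions, each given as P(answer = yes | hypothesis).
prior = [0.25, 0.25, 0.25, 0.25]
questions = {
    "even_split": [1.0, 1.0, 0.0, 0.0],   # splits the hypotheses in half
    "single_out": [1.0, 0.0, 0.0, 0.0],   # tests one hypothesis at a time
}

def expected_posterior_entropy(prior, p_yes_given_h):
    """Expected entropy of the posterior after hearing the answer."""
    h = 0.0
    for likes in (p_yes_given_h, [1 - l for l in p_yes_given_h]):
        p_ans = sum(p * l for p, l in zip(prior, likes))
        if p_ans == 0:
            continue
        posterior = [p * l / p_ans for p, l in zip(prior, likes)]
        h += p_ans * entropy(posterior)
    return h

# Ask the question that minimizes expected remaining uncertainty.
best = min(questions, key=lambda q: expected_posterior_entropy(prior, questions[q]))
print(best)  # → even_split
```

As expected, the halving question wins (1 bit of expected remaining entropy versus about 1.19 bits for testing hypotheses one at a time), which is exactly why such a system can minimize the number of questions it puts to the user.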
My initial reaction was "I wish I hadn't known about this", because it made me physically shudder. After the shock and disgust, I forced myself to accept the proposition "There is a company selling bleach as medicine, and people are ingesting it". I am now glad I have seen this, because my model of the world is more accurate, and if I act on my values in accordance with more accurate beliefs, I will be able to do more good.