xamdam comments on Book Club Update and Chapter 1 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (79)
Speaking of Chapter 1, it seems worth highlighting another point that may be unclear on a superficial reading.
The author introduces the notion of a reasoning "robot" that maintains a consistent set of "plausibility" values (probabilities) according to a small set of rules.
To a modern reader, it may give the impression that the author is proposing a practical algorithm or an implementation of an artificial intelligence that uses Bayesian inference as its reasoning process.
I think this misses the point completely. First: it is clear that maintaining such a system of probability values consistently, even for a set of simple Boolean formulas, amounts to solving SAT problems and is therefore computationally infeasible in general.
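To make the complexity point concrete, here is a minimal sketch (my own toy illustration, not anything from the book): deciding whether a set of Boolean formulas can all be true together is exactly SAT, and the brute-force check below takes time 2^n in the number of variables. A robot assigning consistent plausibilities must, among other things, give plausibility 0 to every unsatisfiable conjunction, so it implicitly solves problems of this kind.

```python
from itertools import product

def satisfiable(formulas, n_vars):
    """Brute-force SAT check: try all 2**n_vars truth assignments.

    `formulas` is a list of functions mapping a truth assignment
    (a tuple of bools) to a bool.  The runtime grows as 2**n_vars,
    which is the sense in which consistently assigning plausibilities
    to arbitrary Boolean formulas is infeasible in general
    (SAT is NP-complete).
    """
    return any(all(f(a) for f in formulas)
               for a in product([False, True], repeat=n_vars))

# Variables x0, x1: the set {(x0 or x1), (not x0)} is satisfiable
# (x0=False, x1=True works).
fs = [lambda a: a[0] or a[1], lambda a: not a[0]]
print(satisfiable(fs, 2))   # True

# {x0, (not x0)} is unsatisfiable: the robot must assign the
# conjunction plausibility 0.
fs2 = [lambda a: a[0], lambda a: not a[0]]
print(satisfiable(fs2, 2))  # False
```

This brute-force version is only for illustration; real SAT solvers are far cleverer, but the worst case remains exponential unless P = NP.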
Rather, the author's purpose in introducing the "robot" was to avoid the misconception that the plausibility desiderata are subjective, inaccurate notions that depend on hidden features of the human mind. By detaching the inference rules from the human mind and attaching them to an idealized "robot", the author argues that these axioms and their consequences can and should be studied mathematically, independently of all other features and aspects of human thinking and rationality.
So the objective here was not to build an intelligence, but rather to study an abstract and computationally unconstrained version of intelligence obeying the above principles alone.
Such an AI will never be realized in practice (due to inherent complexity limitations, and here I am not just talking about P != NP!). Still, if we can prove what this theoretical AI would have to do in certain specific situations, then we can learn important lessons about the above principles, or even guide our decisions by the insights gained from that study.
I agree that Jaynes is using the robot as a literary device to get a point across.
If I understood you correctly, you seem to be sneaking in an additional claim: that a Bayesian AI is theoretically impossible due to computational concerns. That should be discussed separately, but the obvious counterargument is that while, say, exact inference in Bayesian networks has been proved intractable, approximate inference does well on good-sized problems, and approximate does not mean non-Bayesian.
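As a sketch of the approximate-inference point (a hypothetical two-node network of my own, not an example from the thread): rejection sampling estimates a Bayesian posterior using only cheap forward samples from the model, and the same idea underlies practical approximate inference in large networks where exact inference is intractable.

```python
import random

def sample_rain_model(rng):
    """One forward sample from a toy network:
    P(rain) = 0.2, P(wet | rain) = 0.9, P(wet | not rain) = 0.1."""
    rain = rng.random() < 0.2
    wet = rng.random() < (0.9 if rain else 0.1)
    return rain, wet

def estimate_p_rain_given_wet(n=100_000, seed=0):
    """Rejection sampling: keep only samples matching the evidence
    (wet) and report the fraction of those in which it rained."""
    rng = random.Random(seed)
    kept = hits = 0
    for _ in range(n):
        rain, wet = sample_rain_model(rng)
        if wet:           # condition on the evidence
            kept += 1
            hits += rain
    return hits / kept

# The exact posterior, by Bayes' theorem, is
# 0.2*0.9 / (0.2*0.9 + 0.8*0.1) = 0.18/0.26, roughly 0.692;
# the sampled estimate converges toward it as n grows.
print(estimate_p_rain_given_wet())
```

Of course, in this two-node toy the exact answer is trivial; the point is only that the sampling estimate needs nothing but forward simulation, which still works when exact summation over the network blows up.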
Sorry, I never meant to imply that an AI built on Bayesian principles is impossible or even a bad idea. (Using Bayesian inference is probably a fundamentally good idea.)
I just tried to point out that easy-looking principles don't necessarily translate into practical implementations in a straightforward way.