Morendil comments on Book Club Update and Chapter 1 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (79)
Speaking of Chapter 1, it seems worth highlighting another point that may be unclear on a superficial reading.
The author introduces the notion of a reasoning "robot" that maintains a consistent set of "plausibility" values (probabilities) according to a small set of rules.
To a modern reader, this may give the impression that the author is proposing a practical algorithm, or an implementation of an artificial intelligence that uses Bayesian inference as its reasoning process.
I think this misses the point completely. First, it is clear that maintaining such a system of probability values consistently, even for a set of simple Boolean formulas, amounts to solving SAT problems and is therefore computationally infeasible in general.
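To make the SAT connection concrete: deciding whether a set of Boolean formulas can all be assigned plausibility 1 consistently is exactly the satisfiability question, and the obvious algorithm checks all 2^n truth assignments. A minimal brute-force sketch (the formulas and variable names are illustrative, not from the book):

```python
from itertools import product

def satisfiable(formulas, variables):
    """Brute-force SAT check: does any truth assignment make every
    formula true?  Runs in O(2^n) time for n variables, which is why
    keeping a globally consistent set of plausibilities is infeasible
    in general (SAT is NP-complete)."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(f(env) for f in formulas):
            return True
    return False

# Assigning plausibility 1 to each of these clauses is consistent
# only if their conjunction is satisfiable.
formulas = [
    lambda e: e["A"] or e["B"],
    lambda e: not e["A"] or e["C"],
    lambda e: not e["B"] or not e["C"],
]
print(satisfiable(formulas, ["A", "B", "C"]))  # → True
```

Each extra proposition doubles the work, so the robot's bookkeeping blows up long before any interesting knowledge base is reached.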
Rather, the author's purpose in introducing the "robot" was to avoid the misconception that the plausibility desiderata are subjective, inaccurate notions that depend on hidden features of the human mind. By detaching the inference rules from the human mind and attaching them to an idealized "robot", the author argues that these axioms and their consequences can and should be studied mathematically, independently of all other features and aspects of human thinking and rationality.
So the objective here was not to build an intelligence, but rather to study an abstract, computationally unconstrained version of intelligence obeying the above principles alone.
Such an AI will never be realized in practice (due to inherent complexity limitations, and here I don't just mean P != NP!). Still, if we can prove what this theoretical AI would have to do in certain specific situations, then we can learn important lessons about the above principles, or even guide our decisions by the insights gained from that study.
What then do you make of Jaynes's observation in the Comments: "Our present model of the robot is quite literally real, because today it is almost universally true that any nontrivial probability evaluation is performed by a computer"?
On my reading, it means that there are already actual implementations of all the probability inference operations that the author considers in the book.
This was probably already true in the '60s. It does not mean that the robot as a whole is feasible resource-wise.
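The individual operations really are trivial to implement: a single application of Bayes' theorem, for instance, is a few lines of arithmetic. The numbers below are an invented example (1% prior, 90% sensitivity, 5% false-positive rate), not from the book:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """One application of Bayes' theorem:
    P(H|E) = P(E|H) P(H) / (P(E|H) P(H) + P(E|~H) P(~H))."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Hypothetical diagnostic test: 1% base rate, 90% sensitivity,
# 5% false-positive rate.
print(bayes_update(0.01, 0.90, 0.05))  # → ≈ 0.154
```

What is hard is not any single evaluation like this, but keeping *all* such assignments jointly consistent across a large body of propositions.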
An analogy: it is not hard to implement all the (non-probabilistic) logical derivation rules. It is also straightforward to use them to generate all provable mathematical theorems (e.g., within ZFC). However, this does not give us a practical (i.e., efficient) general-purpose mathematical theorem prover. It yields an algorithm that eventually proves every provable theorem, but its run time makes the approach practically useless.
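The enumeration argument can be sketched in a few lines. The "formal system" here is a made-up toy (one axiom, two string-rewriting rules standing in for derivation rules); the point is only the shape of the procedure — breadth-first search is complete, yet the frontier grows exponentially:

```python
from collections import deque

def enumerate_theorems(axioms, rules, limit):
    """Breadth-first enumeration of everything derivable from the
    axioms.  Complete: every derivable string appears eventually.
    Useless in practice: the queue grows exponentially with depth."""
    seen = set(axioms)
    queue = deque(axioms)
    theorems = []
    while queue and len(theorems) < limit:
        t = queue.popleft()
        theorems.append(t)
        for rule in rules:
            for u in rule(t):
                if u not in seen:
                    seen.add(u)
                    queue.append(u)
    return theorems

# Toy system: axiom "I"; one rule doubles the string, one appends "U".
rules = [lambda s: [s + s], lambda s: [s + "U"]]
print(enumerate_theorems(["I"], rules, 5))
```

Reaching any *particular* theorem this way can take time exponential in the length of its shortest derivation, which is exactly why completeness alone buys nothing practical — and the same gap separates Jaynes's idealized robot from any implementable reasoner.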