Eliezer, your post is entirely consistent with what I said to Robin in my comments on "Morality Is Overrated": Morality is a means, not an end.
For a more sophisticated theory of analogical reasoning, you should read Dedre Gentner's papers. A good starting point is The structure-mapping engine: Algorithm and examples. Gentner defines a hierarchy of attributes (properties of entities; in logic, predicates with single arguments, P(X)), first-order relations (relations between entities; in logic, predicates with two or more arguments, R(X,Y)), and higher-order relations (relations between relations). Her experiments with children show that they begin reasoning with attributional similarity (what you call "surface similarities"); as they mature, they make increasing use of first-order relational similarity (what you call "structural similarity"); finally, they begin using higher-order relations, especially causal relations. This fits perfectly with your description of your own childhood. See Language and the career of similarity.
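Gentner's hierarchy can be made concrete with a small sketch. This is my own toy encoding, not code from the structure-mapping engine itself; the predicate names and the match function are illustrative assumptions.

```python
# Toy encoding of Gentner's hierarchy (names are illustrative,
# not taken from the structure-mapping engine).

# Attributes: one-place predicates, P(X).
attributes = {("red", "ball"), ("large", "sun")}

# First-order relations: predicates over entities, R(X, Y).
first_order = {("revolves_around", "planet", "sun"),
               ("revolves_around", "electron", "nucleus")}

# Higher-order relations: predicates over other relations, e.g.
# CAUSE(attracts(sun, planet), revolves_around(planet, sun)).
higher_order = {("cause",
                 ("attracts", "sun", "planet"),
                 ("revolves_around", "planet", "sun"))}

def relational_match(r1, r2):
    """Structural similarity ignores the identities of the entities
    and matches only the relation name."""
    return r1[0] == r2[0]

# The solar system and the atom share no attributes, but their
# first-order relational structure matches:
a, b = sorted(first_order)
assert relational_match(a, b)
```

The point of the sketch is the contrast: attributional similarity compares the one-place predicates, while relational similarity, which children acquire later, compares the relation names regardless of which entities fill the argument slots.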
I discuss the hero worship of great scientists in The Heroic Theory of Scientific Development and I discuss genius in Genius, Sustained Effort, and Passion.
I think this line of reasoning can be taken even further: Everything is relations; attributes are an illusion.
Bayesianism has its uses, but it is not the final answer. It is itself the product of a more fundamental process: evolution. Science, technology, language, and culture are all governed by evolution. I believe that this gives much deeper insight into science and knowledge than Bayesianism. See:
(1) Multiple Discovery: The Pattern of Scientific Progress, Lamb and Easton
(2) Without Miracles: Universal Selection Theory and the Second Darwinian Revolution, Cziko
(3) Darwin's Dangerous Idea: Evolution and the Meanings of Life, Dennett
(4) The Evolution of Technology, Basalla
Scientific method itself evolves. Bayesianism is part of that evolution, but only a small part.
I agree with your general view, but I came to the same view by a more conventional route: I got a PhD in philosophy of science. If you study philosophy of science, you soon find that nobody really knows what science is. The "Science" you describe is essentially Popper's view of science, which has been extensively criticized and revised by later philosophers. For example, how can you falsify a theory? You need a fact (an "observation") that conflicts with the theory. But what is a fact, if not a true mini-theory? And how can you know that it is true, if theories can be falsified, but not proven? I studied philosophy because I was looking for a rational foundation for understanding the world; something like what Descartes promised with "cogito ergo sum" ("I think, therefore I am"). I soon learned that there is no such foundation. Making a rational model of the world is not like building a house, where the first step is to lay a solid foundation. It is more like patching a hole in a sinking ship, where you don't have the luxury of starting from scratch. I view science as an evolutionary process. Changes must be made in small increments: "Natura non facit saltus" ("nature does not make leaps").
One flaw I see in your post is that the rule "You cannot trust any rule" applies recursively to itself. (Anything you can do, I can do meta.) I would say "Doubt everything, but one at a time, not all at once."
(The other part of the "experimental evidence" comes from statisticians / computer scientists / Artificial Intelligence researchers, testing which definitions of "simplicity" let them construct computer programs that do empirically well at predicting future data from past data. The Minimum Message Length paradigm has probably proven most productive here, because it is a very adaptable way to think about real-world problems.)
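The two-part idea behind Minimum Message Length can be sketched in a few lines: the total cost of a hypothesis is the bits needed to state the model plus the bits needed to encode the data given the model. The cost function below is a toy stand-in of my own, not a real MML code, and the bit counts are illustrative assumptions.

```python
import math

def two_part_length(model_bits, residuals, precision=1.0):
    """MML-style total code length: bits to state the model plus
    bits to encode the data given the model (the residuals)."""
    # Toy encoding cost: larger residuals cost more bits.
    # A real MML code would be derived from a probability model.
    data_bits = sum(math.log2(1 + abs(r) / precision) for r in residuals)
    return model_bits + data_bits

data = [2.1, 1.9, 2.0, 2.2, 1.8, 2.05]

# Hypothesis A: "every point is 2.0" -- one parameter, small residuals.
res_a = [x - 2.0 for x in data]
len_a = two_part_length(model_bits=16, residuals=res_a)

# Hypothesis B: "each point is its own parameter" -- zero residuals,
# but the model itself costs 16 bits per point.
len_b = two_part_length(model_bits=16 * len(data),
                        residuals=[0.0] * len(data))

assert len_a < len_b  # the simpler model wins on total message length
```

The overfitted hypothesis encodes the data for free but pays heavily to state itself, so the shorter total message favors the simpler model.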
I once believed that simplicity is the key to induction (it was the topic of my PhD thesis), but I no longer believe this. I think most researchers in machine learning have come to the same conclusion. Here are some problems with the idea that simplicity is a guide to truth:
(1) Solomonoff/Kolmogorov/Chaitin complexity is not merely intractable; it is uncomputable, even in principle.
(2) The Minimum Message Length depends entirely on how a situation is represented. Different representations lead to radically different MML complexity measures. This is a general problem with any attempt to measure simplicity. How do you justify your choice of representation? For any two hypotheses, A and B, it is possible to find a representation X such that complexity(A) < complexity(B) and another representation Y such that complexity(A) > complexity(B).
(3) Simplicity is merely one type of bias. The No Free Lunch theorems show that there is no a priori reason to prefer one type of bias over another. Therefore there is nothing special about a bias towards simplicity. A bias towards complexity is equally valid a priori.
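The representation-dependence problem in (2) can be demonstrated with Boolean functions. The complexity measure below (operation count under a chosen primitive basis) and the specific bases are my own illustrative choices, not a standard benchmark.

```python
# Toy complexity measure: the number of primitive operations in the
# shortest formula for a Boolean function, under a given basis.

def xor(x, y): return x != y
def nand(x, y): return not (x and y)

# Check that the formulas below really compute A = XOR and B = AND.
for x in (False, True):
    for y in (False, True):
        assert ((x and not y) or (not x and y)) == xor(x, y)
        assert nand(nand(x, y), nand(x, y)) == (x and y)

# Representation X: basis {AND, OR, NOT}.
#   B = "x AND y"                         -> 1 operation
#   A = "(x AND NOT y) OR (NOT x AND y)"  -> 5 operations
complexity_x = {"A": 5, "B": 1}

# Representation Y: basis {XOR, NAND}.
#   A = "x XOR y"                         -> 1 operation
#   B = "NAND(NAND(x, y), NAND(x, y))"    -> 3 operations
complexity_y = {"A": 1, "B": 3}

# The two representations reverse the complexity ordering:
assert complexity_x["A"] > complexity_x["B"]
assert complexity_y["A"] < complexity_y["B"]
```

The same pair of hypotheses is ordered one way under basis X and the opposite way under basis Y, which is exactly the point: calling one of them "simpler" presupposes a choice of representation that itself needs justification.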
http://www.jair.org/papers/paper228.html
http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization
http://en.wikipedia.org/wiki/Inductive_bias
See: Good, Evil, Morality, and Ethics: "What would it mean to want to be moral (to do the moral thing) purely for the sake of morality itself, rather than for the sake of something else? What could this possibly mean to a scientific materialistic atheist? What is this abstract, independent, pure morality? Where does it come from? How can we know it? I think we must conclude that morality is a means, not an end in itself."