Yes, on re-reading I see what you are saying.
Yes, thanks. And indeed, the standard mathematical description of the change in allele frequencies over generations takes the form of a Bayesian update, in which the likelihood is the ratio of the reproductive fitness of a particular allele to the average reproductive fitness of all competing alleles at that locus.
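To make the parallel concrete, here is the discrete replicator equation written so the Bayesian form is visible (a sketch of the correspondence I have in mind; the notation is mine):

$$p_{t+1}(i) = p_t(i)\,\frac{w_i}{\bar{w}}, \qquad \bar{w} = \sum_j p_t(j)\,w_j$$

where $p_t(i)$ is the frequency of allele $i$ in generation $t$ and $w_i$ is its reproductive fitness. The current frequency plays the role of the prior, the next generation's frequency the role of the posterior, and the fitness ratio $w_i/\bar{w}$ the role of the normalized likelihood.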
What a wonderful post!
I find it intellectually exhilarating, as I have not been introduced to Solomonoff before and his work may be very informative for my studies. I have come at inference from quite a different direction, and I am hopeful that an appreciation of Solomonoff will broaden my scope.
One thing that puzzles me is the assertion that:
> Therefore an algorithm that is one bit longer is half as likely to be the true algorithm. Notice that this intuitively fits Occam's razor; a hypothesis that is 8 bits long is much more likely than a hypothesis that is 34 bits long. Why bother with extra bits? We’d need evidence to show that they were necessary.
First, my understanding of the principle of maximum entropy suggests that prior probabilities are constrained only by evidence, not by the length of the algorithm expressing the hypothesis. In fact, Jaynes argues that Ockham's razor is already built into Bayesian inference.
Second, given that the probability is halved with every added bit of algorithm length, wouldn't that imply that algorithms of length 1 bit are the most likely, each with probability 1/2? In fact I doubt that any algorithm at all is describable in 1 bit. Some comments, as well as the body of the article, suggest that the real accomplishment of Solomonoff's approach is to provide the set of all possible algorithms/hypotheses, and that the probabilities assigned to each are not part of a probability distribution but rather serve as a ranking. Why do they need to be ranked? Why not assign them all probability 1/N, where N = 2^(n+1) - 2 is the number of algorithms of length up to and including n?
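As a quick numerical check of the counting (my own sketch, treating hypotheses as arbitrary binary strings of length 1 to n, which glosses over the prefix-free programs Solomonoff's construction actually uses):

```python
n = 8

# Number of binary strings of length 1 through n: 2 + 4 + ... + 2**n.
N = sum(2**L for L in range(1, n + 1))
assert N == 2**(n + 1) - 2  # the formula quoted above

# Uniform prior: every string gets the same probability 1/N.
uniform = 1.0 / N

# Length-weighted prior: each extra bit halves the weight, so a
# string of length L gets weight 2**-L (before any normalization).
print(f"N = {N}")
print(f"uniform prior:           {uniform:.6f}")
print(f"weight of an 8-bit str:  {2**-8:.6f}")
print(f"weight of a 34-bit str:  {2**-34:.2e}")
```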
Clearly I am missing something important.
Could it be that ranking them by length is for the purpose of determining the sequence in which the possible hypotheses should be evaluated? If we rank hypotheses by length and then evaluate them against the evidence in sequence from shorter to longer, our search will stop at the shortest adequate algorithm, which by Occam's razor is the preferred one.
Excellent post.
I have pondered the same sort of questions. Here is an excerpt from my 2009 book.
My father is 88 years old and a devout Christian. Before he became afflicted with Alzheimer’s he expected to have an afterlife where he would be reunited with his deceased daughter and other departed loved ones. He doesn’t talk of this now and would not be able to comprehend the question if asked. He is now almost totally unaware of who he is or what his life was. I sometimes tell him the story of his life: details of what he did in his working life, stories of his friends, the adventures he undertook. Sometimes these accounts stir distant memories. I have recently come to understand that there is more of ‘him’ alive in me than there is in him. Were he to die and enter the afterlife in his present state and be reunited with my sister, he would not recognize or remember her. Would he be restored to some state earlier in his life? Would he be the same person at all?
I originally wrote this to illustrate problems with the religious idea of resurrection. I now believe that this problem of identity is common to all complex evolving systems, including 'ourselves'. For example, species evolve over their lifetimes, and although we intuitively know that we are identifying something distinct when we name a species such as Homo sapiens, the exact nature of the distinction is slippery. The debate in biology over the definition of species has been long, heated, and unresolved. Some definitions of species are attempts along the lines of 'interbreeding populations that do not overlap with other populations'. However, this is a leaky definition. For example, it has recently been found that modern human populations contain some Neanderthal DNA. Our 'species' interbred in the past; should we still be considered separate species?
The 'irreducible complexity' argument advocated by the intelligent design community often cites the specific example of the eye. It is argued that the eye is a complex organ with many different parts that must all work together perfectly, and that this implies it could not have been built up out of small, gradual, random changes.
This argument has been around a long time, but it has been well answered in the scientific literature, and the vast majority of biologists consider the issue settled.
Dawkins' book 'Climbing Mount Improbable' provides a summary of the science for the lay reader and uses the eye as a detailed example.
Darwin was the first to explain how the eye could have evolved via natural selection. I quote the Wikipedia article:
> Charles Darwin himself wrote in his Origin of Species that the evolution of the eye by natural selection at first glance seemed "absurd in the highest possible degree". However, he went on to explain that despite the difficulty in imagining it, this was perfectly feasible:
>
> ...if numerous gradations from a simple and imperfect eye to one complex and perfect can be shown to exist, each grade being useful to its possessor, as is certainly the case; if further, the eye ever varies and the variations be inherited, as is likewise certainly the case and if such variations should be useful to any animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, should not be considered as subversive of the theory.
The argument from 'irreducible complexity' has been around since Darwin first proposed natural selection, and it has been conclusively answered in the scientific literature (for a good summary see the Wikipedia article). Those who believe that all life was created by God cannot accept the scientific explanation. In my view the real problem is that they tend to argue that they have superior scientific evidence which proves the scientific consensus wrong. In other words, the intelligent design community argues that it is scientifically superior to the scientific community. This reduces their position to an undignified one of deception or perhaps even fraud.
I was also inspired by one of Dawkins' books suggesting something similar. It was some years ago, but I believe Dawkins suggested writing a type of computer script which would mimic natural selection. I wrote such a script and was quite surprised at the power it demonstrated.
As I remember it, the general idea is that you can type in any string of characters you like and then click the 'evolve' button. The computer program then:
1) generates and displays a string of random characters of the same length as the entered string.
2) compares the new string with the entered string and retains all characters that are the same and in the same position.
3) generates random characters at the positions that did not match in step 2 and displays the full string.
4) if the string from step 3 matches the string entered by the user, the program stops; otherwise it goes back to step 2.
The rapidity with which this program converges on the entered string is quite surprising.
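For anyone who wants to try it, here is a minimal runnable sketch of the script as I described it (the alphabet, the target phrase, and all names are my own illustrative choices):

```python
import random
import string

ALPHABET = string.ascii_uppercase + " "

def evolve(target):
    """Converge on `target` by keeping matched characters and
    re-randomizing the rest; returns the generation count."""
    # Step 1: start from a fully random string of the same length.
    current = [random.choice(ALPHABET) for _ in target]
    generations = 0
    while "".join(current) != target:  # step 4: stop on a full match
        generations += 1
        # Steps 2-3: keep characters that already match the entered
        # string; re-randomize only the positions that do not.
        current = [t if c == t else random.choice(ALPHABET)
                   for c, t in zip(current, target)]
    return generations

print(evolve("METHINKS IT IS LIKE A WEASEL"))  # converges quickly
```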
This simulation differs from natural selection, especially in that the selection rules are hard-coded, but I think it does demonstrate the power of random changes to converge when there is strong selection pressure.
A fascinating aid for demonstrating natural selection was built by Darwin's cousin Francis Galton in 1877. An illustration and description can be found here. The amazing thing about this device is that, as described in the article, it has been re-discovered and re-purposed to illustrate the process of Bayesian inference.
I have come to consider this isomorphism between Bayesian inference and natural selection, or Darwinian processes in general, as a deep insight into the workings of nature. I view natural selection as a method of physically performing Bayesian inference, specifically a method for inferring means of reproductive success. My paper on this subject may be found here.
I agree with your statement:
> if we require 100% justified confidence to consider something knowledge, no one knows or can know a single thing.
However, I think you are misunderstanding me.
I don't think we require 100% justified confidence for there to be knowledge; I believe knowledge is always probabilistic, and that scientific knowledge always carries a probability of less than 100%.
I suggest that knowledge is justified belief, but the justification is always a probability less than 100%. As I wrote, I mean justified in the Bayesian sense, which assigns a probability to a state of knowledge. The correct probability to assign may be calculated with the Bayesian update.
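In symbols, the update I have in mind is just Bayes' theorem:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

where the posterior $P(H \mid E)$ is the probability, i.e. the state of knowledge, that evidence $E$ warrants for hypothesis $H$.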
This is a common Bayesian interpretation. As Jaynes wrote:
> In our terminology, a probability is something that we assign, in order to represent a state of knowledge.
You misunderstand me. I did not say that it was 'known' that the theory was true.
I reject the notion that any scientific theory can be known to be 100% true. I stated:
> Perhaps those scientists from the past should have said it had a high probability of being true.
As we now know, Newton's theory of gravitation is not 100% true, and therefore in a strict logical sense it is not true at all. We have counterexamples, such as the shift of Mercury's perihelion, which the theory does not predict. However, the theory is still a source of knowledge; it was used by NASA to get men to the moon.
Perhaps considering knowledge as an all-or-nothing characteristic is unhelpful.
If we accept that a theory must be true, or certain, in order to contain knowledge, it seems to me that no scientific theory can contain knowledge. All scientific theories are falsifiable and therefore uncertain.
I also consider it hubris to think we might ever develop a 'true' scientific theory, as I believe the complexities of reality are far beyond what we can now imagine. I expect, however, that we will continue to accumulate knowledge along the way.
I would be interested if you would care to elaborate a little. Syllogisms have been a mainstay of philosophy for over two millennia, and undoubtedly I have a lot to learn about them.
In my admittedly limited understanding of syllogisms, the conclusion is true given that the premises are true. The truth is more in the structure of the argument than in its conclusion. If Socrates is not mortal, then either he is not a man or not all men are mortal.
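To spell out the structure I mean (my own formalization of the example):

$$\big(\forall x\,(\mathrm{Man}(x) \to \mathrm{Mortal}(x)) \;\wedge\; \mathrm{Man}(\mathrm{Socrates})\big) \;\to\; \mathrm{Mortal}(\mathrm{Socrates})$$

By modus tollens, $\neg\mathrm{Mortal}(\mathrm{Socrates})$ entails $\neg\mathrm{Man}(\mathrm{Socrates}) \;\vee\; \neg\forall x\,(\mathrm{Man}(x) \to \mathrm{Mortal}(x))$; the guaranteed truth lives in the implication as a whole, not in the conclusion on its own.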