Comment author: pragmatist 19 November 2013 05:37:47PM *  1 point [-]

Wait, did you interpret my comment as supporting the "irreducible complexity" argument? My whole point was that it is a bad argument. I was criticizing the Hangman analogy because it seems to invite the same sort of mistake that the "irreducible complexity" people make.

Comment author: jockocampbell 20 November 2013 11:59:21PM 1 point [-]

Yes, on re-reading I see what you are saying.

Comment author: satt 20 November 2013 12:28:07AM 1 point [-]

I have come to consider this isomorphism between Bayesian inference and natural selection or Darwinian processes in general as a deep insight into the workings of nature.

You might like this. ("In fact, I realized, Bayes's rule just is the discrete-time replicator equation, with different hypotheses being so many different replicators, and the fitness function being the conditional likelihood. ")

Comment author: jockocampbell 20 November 2013 11:56:25PM *  0 points [-]

Yes, thanks, and the standard mathematical description of the change in frequency of alleles over generations is given in the form of a Bayesian update where the likelihood is the ratio of reproductive fitness of the particular allele to the average reproductive fitness of all competing alleles at that locus.
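This correspondence is easy to check numerically. The sketch below (my own illustration, with made-up allele frequencies and fitness values) treats current frequencies as the prior and relative fitness as the likelihood, and confirms that the discrete-time replicator update and the Bayesian update give the same next-generation frequencies:

```python
# Discrete-time replicator dynamics vs. Bayesian updating.
# Allele frequencies play the role of prior probabilities;
# reproductive fitness plays the role of the likelihood.

freqs = [0.5, 0.3, 0.2]    # current allele frequencies (the "prior"); made-up numbers
fitness = [1.2, 1.0, 0.8]  # reproductive fitness of each allele; made-up numbers

# Replicator update: p_i' = p_i * w_i / w_bar, where w_bar is mean fitness.
w_bar = sum(p * w for p, w in zip(freqs, fitness))
replicator = [p * w / w_bar for p, w in zip(freqs, fitness)]

# Bayes update: posterior_i = prior_i * likelihood_i / evidence.
evidence = sum(p * w for p, w in zip(freqs, fitness))
bayes = [p * w / evidence for p, w in zip(freqs, fitness)]

# Term for term, the two updates are identical.
assert all(abs(r - b) < 1e-12 for r, b in zip(replicator, bayes))
print(replicator)  # approximately [0.566, 0.283, 0.151]
```

The mean fitness in the denominator is exactly the normalizing "evidence" term of Bayes's rule, which is the heart of the isomorphism.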

Comment author: jockocampbell 20 November 2013 11:50:25PM 0 points [-]

Yes, on re-reading I see your point.

Comment author: jockocampbell 20 November 2013 08:03:27PM *  0 points [-]

What a wonderful post!

I find it intellectually exhilarating as I have not been introduced to Solomonoff before and his work may be very informative for my studies. I have come at inference from quite a different direction and I am hopeful that an appreciation of Solomonoff will broaden my scope.

One thing that puzzles me is the assertion that:

Therefore an algorithm that is one bit longer is half as likely to be the true algorithm. Notice that this intuitively fits Occam's razor; a hypothesis that is 8 bits long is much more likely than a hypothesis that is 34 bits long. Why bother with extra bits? We’d need evidence to show that they were necessary.

First, my understanding of the principle of maximum entropy suggests that prior probabilities are constrained only by the available evidence and not by the length of the algorithm expressing the hypothesis. In fact, Jaynes argues that Ockham's razor is already built into Bayesian inference.

Second, given that the probability is halved with every added bit of algorithm length, wouldn't that imply that algorithms one bit long are the most likely, each with probability 1/2? In fact I doubt that any algorithm at all is describable in one bit. Some comments, as well as the body of the article, suggest that the real accomplishment of Solomonoff's approach is to provide the set of all possible algorithms/hypotheses, and that the probabilities assigned to each are not part of a probability distribution but rather serve to rank them. Why do they need to be ranked? Why not assign them all probability 1/N, where N = 2^(n+1) - 2 is the number of algorithms of length up to and including n?

Clearly I am missing something important.

Could it be that ranking them by length determines the sequence in which the possible hypotheses should be evaluated? If we rank hypotheses by length and then evaluate them against the evidence in sequence, from shorter to longer, the search will stop at the shortest adequate algorithm, which by Occam's razor is the preferred one.
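That shortest-first reading can be made concrete. In the toy sketch below (my own construction, not from the article), "hypotheses" are bit-strings enumerated from shortest to longest, a stand-in predicate plays the role of running each hypothesis against the data, and the search stops at the first consistent one, which is automatically the shortest:

```python
from itertools import product

def predicts(hypothesis, data):
    # Toy stand-in for "running" a hypothesis: the bit-string h "predicts"
    # the data if the data is a repetition of h. (A real Solomonoff
    # inductor would execute h as a program; this placeholder only
    # illustrates the shortest-first search order.)
    return all(data[i] == hypothesis[i % len(hypothesis)] for i in range(len(data)))

def shortest_consistent(data, max_len=8):
    # Enumerate bit-strings in order of increasing length, so the
    # first hypothesis that survives the evidence is the shortest one.
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            h = "".join(bits)
            if predicts(h, data):
                return h
    return None

print(shortest_consistent("010101010101"))  # "01", the shortest repeating unit
```

Under this scheme the length-based weighting never needs to be a normalized distribution; it only needs to fix the order of evaluation.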

Comment author: jockocampbell 19 November 2013 06:00:43PM *  1 point [-]

Excellent post.

I have pondered the same sort of questions. Here is an excerpt from my 2009 book.

My father is 88 years old and a devout Christian. Before he became afflicted with Alzheimer’s he expected to have an afterlife where he would be reunited with his deceased daughter and other departed loved ones. He doesn’t talk of this now and would not be able to comprehend the question if asked. He is now almost totally unaware of who he is or what his life was. I sometimes tell him the story of his life, details of what he did in his working life, stories of his friends, the adventures he undertook. Sometimes these accounts stir distant memories. I have recently come to understand that there is more of ‘him’ alive in me than there is in him. When he dies and were he to enter the afterlife in his present state and be reunited with my sister he would not recognize or remember her. Would he be restored to some state earlier in his life? Would he be the same person at all?

I originally wrote this to illustrate problems with the religious idea of resurrection. I now believe that this problem of identity is common to all complex evolving systems, including 'ourselves'. For example, species evolve over their lifetimes, and although we intuitively know that we are identifying something distinct when we name a species such as Homo sapiens, the exact nature of the distinction is slippery. The debate in biology over the definition of species has been long, heated and unresolved. Some proposed definitions run along the lines of populations that interbreed among themselves but not with other populations. However, this is a leaky definition: it has recently been found, for example, that modern human populations contain some Neanderthal DNA. Our 'species' interbred in the past; should we still be considered separate species?

Comment author: pragmatist 05 October 2012 02:03:38PM *  2 points [-]

I only have a layperson's knowledge of evolutionary biology, so my criticisms might miss some important subtlety, but it seems to me that your analogy is significantly misleading in a couple of ways. It does convey the idea that random guessing with incremental feedback is a better search strategy than guessing with only holistic feedback (e.g. if you were guessing whole words and the only feedback were whether the guess is correct or not). Insofar as someone's worry about natural selection is that they're mistaking it for the latter sort of search, the analogy may be helpful. But if you want to convey something more specific about how natural selection works, then I'm afraid the analogy isn't all that great.

One drawback of the analogy is in the nature of the environmental feedback. In Hangman, a letter gets fixed if (and only if) it is part of the correct answer. In genuine natural selection, though, a mutation doesn't get fixed because it is part of a complex set of mutations that collectively confer some phenotypic benefit. The environment isn't forward-looking like that; it doesn't say "This mutation is part of what is needed for optimality, so I'm going to hold onto it for that reason." Each individual mutation, in order to get fixed in the population, must confer some immediate reproductive benefit. Merely being one element of some complex group of mutations that is collectively beneficial is insufficient. The hangman analogy doesn't capture this aspect of natural selection.

This actually leads the analogy to kind of play into the hands of "irreducible complexity" critiques of natural selection. The proponents of such critiques presume that the individual parts of some complex adaptation only benefit the organism to the extent that they are part of that complex adaptation, and hence one cannot explain their selection without supposing that there is some forward-looking element to selection which holds onto those individual changes just because they will eventually contribute to a complex adaptation. This forward-looking aspect is then offered as evidence of intelligent design.

Another big drawback is that the analogy doesn't capture the competitive nature of natural selection. Natural selection occurs in populations, and requires both variation in traits among individuals in the population and competition for resources among those individuals. The Hangman analogy suggests that the environment already has a fixed template for the ideal phenotype and that it punishes organisms (or genes) individually for failing to approach this ideal and rewards them for getting closer to the ideal. If you have a population, and things worked in the Hangman way, there would be no correlation between rewards and punishments. But that's not how natural selection works. Genes are rewarded for contributing to their vehicles (organisms) being more reproductively successful than other organisms in the population. A reward just consists in reproducing more than your competitors, and a punishment just consists in reproducing less, so rewards and punishments are correlated. One allele can't get rewarded without another one getting punished.

Comment author: jockocampbell 19 November 2013 05:20:49PM *  0 points [-]

The 'irreducible complexity' argument advocated by the intelligent design community often cites the specific example of the eye. It is argued that an eye is a complex organ with many different parts that must all work together perfectly, and that this implies it could not have been built up gradually through small random changes.

This argument has been around a long time, but it has been well answered in the scientific literature, and the vast majority of biologists consider the issue settled.

Dawkins' book 'Climbing mount improbable' provides a summary of the science for the lay reader and uses the eye as a detailed example.

Darwin was the first to explain how the eye could have evolved via natural selection. I quote the Wikipedia article:

Charles Darwin himself wrote in his Origin of Species that the evolution of the eye by natural selection at first glance seemed "absurd in the highest possible degree". However, he went on to explain that despite the difficulty in imagining it, this was perfectly feasible:

...if numerous gradations from a simple and imperfect eye to one complex and perfect can be shown to exist, each grade being useful to its possessor, as is certainly the case; if further, the eye ever varies and the variations be inherited, as is likewise certainly the case; and if such variations should be useful to any animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, should not be considered as subversive of the theory.

The argument of 'irreducible complexity' has been around since Darwin first proposed natural selection, and it has been conclusively answered within the scientific literature (for a good summary see the Wikipedia article). Those who believe that all life was created by God cannot accept the scientific explanation. In my view the real problem is that they tend to argue that they have superior scientific evidence which proves the scientific consensus wrong. In other words, the intelligent design community argues it is scientifically superior to the science community. This reduces their position to an undignified one of deception or perhaps even fraud.

Comment author: jockocampbell 19 November 2013 04:19:17PM *  0 points [-]

I was also inspired by one of Dawkins' books suggesting something similar. It was some years ago but I believe Dawkins suggested writing a type of computer script which would mimic natural selection. I wrote a script and was quite surprised at the power it demonstrated.

As I remember the general idea is that you can type in any string of characters you like and then click the 'evolve' button. The computer program then:

1) generates and displays a string of random characters of the same length as the entered string.

2) compares the generated string with the entered string and retains all characters that are the same and in the same position.

3) generates random characters in the string where they did not match in 2 and displays the full string.

4) If the string in 3 matches the string entered by the user, the program stops; otherwise it goes to step 2.

The rapidity with which this program converges on the string entered is quite surprising.

This simulation is somewhat different from natural selection especially in that the selection rules are hard coded but I think it does demonstrate the power of random changes to converge when there is strong selection pressure.
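Steps 1–4 above can be sketched in a few lines. This is my own reconstruction, not Dawkins' original code; the alphabet, seed, and target string are arbitrary choices:

```python
import random
import string

ALPHABET = string.ascii_uppercase + " "

def evolve(target, seed=0):
    rng = random.Random(seed)
    # Step 1: generate a random string the same length as the target.
    current = [rng.choice(ALPHABET) for _ in target]
    generations = 0
    # Step 4: stop once the string matches the one entered.
    while "".join(current) != target:
        generations += 1
        # Steps 2-3: keep characters that already match their position,
        # and re-randomize only the rest.
        for i, ch in enumerate(target):
            if current[i] != ch:
                current[i] = rng.choice(ALPHABET)
    return generations

print(evolve("METHINKS IT IS LIKE A WEASEL"))
```

Because each matching character is retained, convergence takes on the order of the alphabet size in generations, rather than the astronomically many trials a whole-string random search would need.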

A fascinating aid in demonstrating natural selection was built by Darwin's cousin Francis Galton in 1877. An illustration and description can be found here. The amazing thing about this device is that, as described in the article, it has been re-discovered and re-purposed to illustrate the process of Bayesian inference.

I have come to consider this isomorphism between Bayesian inference and natural selection, or Darwinian processes in general, as a deep insight into the workings of nature. I view natural selection as a method of physically performing Bayesian inference, specifically as a method for inferring means for reproductive success. My paper on this subject may be found here.

Comment author: VAuroch 18 November 2013 08:50:34PM -2 points [-]

No, Newton's theory of gravitation does not provide knowledge. Belief in it is no longer justified; it contradicts the evidence now available.

However, prior to relativity, the existing evidence justified belief in Newton's theory. Whether or not it justified 100% confidence is irrelevant; if we require 100% justified confidence to consider something knowledge, no one knows or can know a single thing.

So, using the definition you gave, physicists everywhere (except one patent office in Switzerland) knew Newton's theory to be true, because the belief "Newton's theory is accurate" was justified. However, we now know it to be false.

Currently, we have a different theory of gravity. Belief in it is justified by the evidence. By your standard, we know it to be true. That's patently ridiculous, however, since physicists still seek to expand or disprove it.

Comment author: jockocampbell 18 November 2013 09:21:20PM 1 point [-]

I agree with your statement:

if we require 100% justified confidence to consider something knowledge, no one knows or can know a single thing.

However I think you are misunderstanding me.

I don't think we require 100% justified confidence for there to be knowledge. I believe knowledge always comes as a probability, and that scientific knowledge is always something less than 100%.

I suggest that knowledge is justified belief but it is always a probability less than 100%. As I wrote: I mean justified in the Bayesian sense which assigns a probability to a state of knowledge. The correct probability to assign may be calculated with the Bayesian update.

This is a common Bayesian interpretation. As Jaynes wrote:

In our terminology, a probability is something that we assign, in order to represent a state of knowledge.
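As a concrete illustration of assigning and revising such a probability (the numbers here are my own, purely hypothetical): suppose a theory is held with 90% confidence, and an observation arrives that the theory predicts poorly. The Bayesian update runs:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_evidence

# Hypothetical numbers: the theory assigns the observation probability
# 0.05, while its negation assigns it 0.5.
posterior = bayes_update(prior=0.9, p_e_given_h=0.05, p_e_given_not_h=0.5)
print(posterior)  # confidence drops sharply, but not to exactly zero
```

On this view, "knowledge" of the theory is just the current value of that probability; it can be driven arbitrarily close to 0 or 1 by evidence but never pinned there.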

Comment author: VAuroch 18 November 2013 09:38:03AM -1 points [-]

You tried to define knowledge as simply 'justified belief'. The example scientific theory was believed to be true, and that belief was justified by the evidence then available. But, as we now know, that belief was false. By your definition, however, they can still be said to have 'known' the theory was true.

That is the problem with the definition not including the 'true' caveat.

Comment author: jockocampbell 18 November 2013 08:24:49PM *  0 points [-]

You misunderstand me. I did not say it was

'known' the theory was true.

I reject the notion that any scientific theory can be known to be 100% true. I stated:

Perhaps those scientists from the past should have said it had a high probability of being true.

As we all know now, Newton's theory of gravitation is not 100% true, and therefore in a strictly logical sense it is not true at all: we have counterexamples, such as the shift of Mercury's perihelion, which it does not predict. However, the theory is still a source of knowledge; it was used by NASA to get men to the moon.

Perhaps considering knowledge as an all or none characteristic is unhelpful.

If we accept that a theory must be true or certain in order to contain knowledge it seems to me that no scientific theory can contain knowledge. All scientific theories are falsifiable and therefore uncertain.

I also consider it hubris to think we might ever develop a 'true' scientific theory as I believe the complexities of reality are far beyond what we can now imagine. I expect however that we will continue to accumulate knowledge along the way.

Comment author: PhilGoetz 17 November 2013 07:44:28PM 3 points [-]

The "Socrates is mortal" one is a good example because nowadays its conclusion has a probability of less than one.

Comment author: jockocampbell 17 November 2013 08:04:29PM 0 points [-]

I would be interested if you would care to elaborate a little. Syllogisms have been a mainstay of philosophy for over two millennia and undoubtedly I have a lot to learn about them.

In my admittedly limited understanding of syllogisms, the conclusion is true given that the premises are true. Truth is more in the structure of the argument than in its conclusion. If Socrates is not mortal, then either he is not a man or not all men are mortal.
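The conditional character of the conclusion can be stated precisely. In this minimal Lean sketch (the names are my own, purely illustrative), the proof of the conclusion literally consumes both premises, so denying the conclusion forces denying at least one premise:

```lean
-- The classic syllogism: the conclusion holds *given* the premises.
example (Person : Type) (socrates : Person)
    (Man Mortal : Person → Prop)
    (h₁ : ∀ x, Man x → Mortal x)   -- premise: all men are mortal
    (h₂ : Man socrates)            -- premise: Socrates is a man
    : Mortal socrates :=
  h₁ socrates h₂
```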
