Manfred comments on Open Thread: September 2011 - Less Wrong

5 Post author: Pavitra 03 September 2011 07:50PM




Comment author: Manfred 21 September 2011 11:48:45AM 1 point [-]

I wouldn't call Occam's razor an explicit part of reductionism. It's basically equivalent to saying you can't just make up information.

Comment author: JoshuaZ 22 September 2011 12:22:38AM 1 point [-]

I wouldn't call Occam's razor an explicit part of reductionism. It's basically equivalent to saying you can't just make up information.

I don't think so. This may be the case when your hypotheses are something like "A" and "A v B", but if the hypotheses you are comparing are "A" and "C ^ D ^ E", this summary of Occam's razor seems to be insufficient.

Comment author: Manfred 22 September 2011 01:16:24AM 0 points [-]

If both hypotheses explain some set of data, I've usually been able to make a direct comparison even in what look like tough cases by following the information in the data - what sort of process generates it, etc. Keeping things in terms of the "language" of the data is in fact also justified by the idea that pulling information from nowhere is bad.

This sort of reliance on our observations is certainly an empiricist assumption, but I don't think a reductionist one.

Comment author: JoshuaZ 22 September 2011 02:23:49AM 3 points [-]

Consider the following problem. You know that there is some property that some integers have and others don't, and you are trying to figure out what the property is. After testing every integer under 10^4, you find that there are 1229 integers under 10^4 that work. You have two hypotheses that describe these. One is that they are exactly the prime numbers. The other is given by a degree-1228 polynomial where P(n) gives the nth number in your set. One of these is clearly simpler. This isn't just a language issue: if I tried to write these out in any reasonable equivalent of a Turing machine or programming language, one of them would be a much shorter program. The distinction here, however, is not just one of making up information. One is genuinely shorter.
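A rough sketch of the description-length gap being pointed at here (my own illustration, not anything from the thread; the set is assumed to be the primes below 10^4, consistent with the count of 1229): a short sieve generates the whole set, while an interpolating polynomial through the same points needs one coefficient per data point.

```python
def primes_below(n):
    """Sieve of Eratosthenes: primes strictly less than n."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            # mark every multiple of i starting at i*i as composite
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i in range(n) if sieve[i]]

primes = primes_below(10 ** 4)
print(len(primes))  # 1229, matching the count in the comment

# The "prime" hypothesis is a fixed, short program whose size does not
# grow as we test more integers. The rival hypothesis, a polynomial P
# with P(n) = nth element of the set, must pass through all 1229 points,
# so it has degree 1228 and 1229 free coefficients: its description
# grows with the data rather than compressing it.
poly_coefficients = len(primes)  # one coefficient per data point
print(poly_coefficients)  # 1229
```

The point of the sketch is only that "shorter program" is a concrete, language-robust comparison: any reasonable programming language gives the sieve a small constant size while the polynomial's coefficient list scales with the observations.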

If one wants, we can give similar historical examples. In 1620 you could make a Copernican model of the solar system that would rival Kepler's model in accuracy, but you would need a massive number of epicycles. The problem here doesn't seem to be pulling information from nowhere; the problem seems to be that one of the hypotheses is simpler in a different way.

Both of these examples do have something in common: in each case the more complicated hypothesis has a lot of parameters that are observationally dependent, whereas the simpler one has far fewer of them. But that seems to be a distinct issue (although it is possibly a good, very rough way of measuring the complexity of hypotheses).