(Sorry, this is rather long and I fear less clear than I would like.)
Uniformitarianism
We are in agreement that uniformitarianism is a matter of degree and that it's the complexity of the "rules" that matters, rather than of what happens. The most popular formulation of this idea around these parts is "Solomonoff induction": suppose that everything you observe is generated by a computer program, give higher initial probability to shorter programs, and then just do Bayesian inference as new observations come in. Aside from being totally uncomputable in theory and infeasibly expensive in practice and depending (finitely but hugely) on exactly what language you write your programs in and how you encode your observations, this is a really good way to decide what to believe :-).
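The flavour of Solomonoff induction can be shown in a toy sketch. This is a minimal illustration, not the real thing: the hypothesis class here is a tiny hand-picked set of bit-predictors with hand-assigned "program lengths" standing in for Kolmogorov complexity (all the names and lengths are made up for illustration), and the prior is proportional to 2^-length:

```python
from fractions import Fraction

# Toy "Solomonoff-style" inference over a tiny, finite hypothesis class.
# Each hypothesis is a deterministic rule predicting the bit at position i,
# paired with a hand-assigned "program length" standing in for real
# (uncomputable) Kolmogorov complexity.
hypotheses = {
    "all zeros":     (3, lambda i: 0),                    # short program
    "all ones":      (3, lambda i: 1),                    # short program
    "alternating":   (5, lambda i: i % 2),                # slightly longer
    "ones from t=4": (9, lambda i: 1 if i >= 4 else 0),   # "rules changed" -- longest
}

def posterior(observations):
    """Prior proportional to 2^-length; zero out hypotheses the data refute."""
    weights = {}
    for name, (length, rule) in hypotheses.items():
        prior = Fraction(1, 2 ** length)
        consistent = all(rule(i) == bit for i, bit in enumerate(observations))
        weights[name] = prior if consistent else Fraction(0)
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print(posterior([0, 0, 0]))        # "all zeros" dominates: shortest consistent program
print(posterior([0, 0, 0, 0, 1]))  # only the "rules changed" hypothesis survives
```

Note that the deterministic-predictor setup means refuted hypotheses drop straight to zero; real Solomonoff induction mixes over all programs and never fully discards one that assigns the data nonzero probability.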
So you're probably right that you can avoid taking an extra plausibility hit from talking snakes as such, if instead you say something like "around the time of creation, living organisms worked by divine magic rather than biology" or "... living organisms were based around completely different biology". That sort of proposition generally incurs a really big cost in plausibility, for two reasons.
If you're aiming for a theory that says "before, the rules were X; after, the rules were Y" then the problem is that now your program needs to contain both sets of rules. If you're never intending to go beyond "before, the rules were different; after, the rules were Y" then the problem is that now your program needs to describe what happened "before" without the compression enabled by having those rules -- this is the "a witch did it" problem.
(How big a problem the latter is depends on how much of what you observe actually depends on what happened "before".)
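The two costs can be put in crude back-of-the-envelope form. This is purely illustrative arithmetic under made-up numbers (real Kolmogorov complexity is uncomputable, and these bit counts are invented for the example):

```python
# Crude description-length comparison for the two kinds of theory above.
# All bit counts are illustrative placeholders, not real measurements.

def cost_two_rule_sets(len_rules_x, len_rules_y, switch_time_bits=8):
    # "Before, the rules were X; after, the rules were Y":
    # pay for both rule sets, plus encoding when the changeover happened.
    return len_rules_x + len_rules_y + switch_time_bits

def cost_witch_did_it(raw_before_bits, len_rules_y):
    # "Before, the rules were different (events listed verbatim); after, Y":
    # the "before" era gets no compression at all.
    return raw_before_bits + len_rules_y

# If the rules would have compressed the "before" era well,
# listing it raw is far costlier:
print(cost_two_rule_sets(100, 120))    # 228 bits
print(cost_witch_did_it(10_000, 120))  # 10120 bits
```

Which theory comes out cheaper depends, as the parenthetical says, on how many of your observations actually reach back into the "before" era: if almost none do, `raw_before_bits` is small and the "witch" theory pays little.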
It seems to me that "biology used to be completely different, in such a way as to make talking snakes not a problem" is obviously no improvement on "there was a (naturally) talking snake". And I think it is, actually, worse than just "biology used to be completely different" -- when a single rule change has to explain multiple anomalies, the more specific anomalies it has to explain, the more constrained the rule change is.
Parallels
OK, so the idea is that you want to find a specific prophetic point of reference for the "he shall crush your head" thing (because obviously the idea that it might refer to people actually killing snakes is completely ridiculous, I guess) and that has to be Jesus because everything has to be Jesus[1], and then the only animate thing whose head Jesus can reasonably be said to have crushed is the devil. But, again, it doesn't seem to me that there's anything internal to the story calling forth such an interpretation, and I'd have thought there's an obvious completely straightforward way to understand the bit about crushing snakes' heads (especially as it comes right alongside something about snakes biting people's feet, which also seems like a fine example of something that doesn't require overinterpretation) -- so, again, this seems like something being imposed on the text from outside, and therefore not a good explanation for why (some) Christians take the snake to be / be controlled by the devil.
[1] Sunday school teacher: "OK, children, can you tell me what's small and brown and furry, with a big fluffy tail, and really likes nuts?" Child: "I'm sure it's Jesus, but it sounds a lot like a squirrel."
Your approach is wrong, and I don't know how it went wrong. (I assume the problem is deeper than "bringer change" being unknown to Google.) If you know what "Kolmogorov complexity" means, maybe think about how you would program a simulated world that allows such a change to be "fundamental" and yet produces the evidence that scientists continually find.
On the much less important issue at hand: you seem to have skipped the question of why this God would take legs away from any "snake," and precisely what that entails. (Should I ask how many Chinese dragons or "seeds" thereof were affected? Or would that distract from the why?)
This is one problem with the absurdity heuristic: because the argument deliberately starts at a point with such a long inferential distance, it can be hard to see where the error has taken place.