A necessary derivation of that universal physics is a vast quantity of space and time which we cannot directly observe.
This isn't quite right. The only thing that makes the derivation "necessary" is your adjective "universal." We could just as easily say that there is a supergalactic physics that explains all we can observe,
Physics is generally held to be universal, instead of just 'supergalactic'. For one, there is the multiverse. But in general, the idea is, as I discuss later, to find the most parsimonious explanation for everything. This is the optimal strategy, and universality is a necessary consequence of this strategy. Any other physics or system which does not explain all observations is of course incomplete and inferior.
It would be remarkably bad science to voluntarily choose to sample only one kilobyte from one address out of thousands of terabytes of data and assume that the kilobyte is representative.
Not at all. You seem to be applying the analogy that at the cosmic scale the universe is some sort of probabilistic urn that generates galactic-sized space-time slices at random whim. It is not.
There is an infinite set of potential physics that have widely different properties in regions we cannot observe. There are strong reasons why these are all necessarily inferior, by the principle of Ockham's razor and the low-complexity bias in Solomonoff induction.
given any sequence of finite observations O, there is an infinite set of algorithms A that perfectly predict/compute the sequence O.
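As a toy sketch of this claim (a Python illustration added for concreteness; the particular sequence and the use of polynomial fitting are arbitrary choices of mine): even five observations are reproduced perfectly by many different rules, which then diverge on the very next unobserved point. Infinitely many higher-degree polynomials would do the same.

```python
# Two different "laws" that agree on every observation made so far.
observations = [1, 2, 4, 8, 16]

def doubling(n):
    """Hypothesis A: the n-th term is 2**n."""
    return 2 ** n

def lagrange(n, points):
    """Hypothesis B: the unique degree-(k-1) polynomial through the
    k observed points (x = 0..k-1), evaluated by Lagrange's formula."""
    total = 0.0
    for i, yi in enumerate(points):
        term = yi
        for j in range(len(points)):
            if j != i:
                term *= (n - j) / (i - j)
        total += term
    return round(total)

# Both hypotheses perfectly predict/compute the observed sequence O...
for n, obs in enumerate(observations):
    assert doubling(n) == obs == lagrange(n, observations)

# ...yet they disagree about the first unobserved point.
print(doubling(5), lagrange(5, observations))  # -> 32 31
```

The doubling rule is the shorter program, which is exactly the low-complexity bias at work.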
Right, but who says our observations are finite?
Elementary physics. There are a finite number of humans, the earth has finite mass, finite information storage potential, and we have finite knowledge.
What if important phenomena, e.g., consciousness (cough), turn out to depend on infinitely small particles?
If you want to believe something like this is true before you begin, that consciousness is somehow different and special, then you are abandoning rationality from the start.
There are no privileged hypotheses and no predefined targets in the quest for knowledge.
What if the fate of the universe in a cosmological sense turns out to depend on what happens over infinitely long periods of time? There is no rule that I know of that says that the Universe is not allowed to clog its equations with infinities.
Sure, infinities are possible, although they generally are viewed to signal a problem in physics when they come up in one's math.
But that's all beside the point: our observations are obviously finite. And furthermore, infinities are not at all an obstacle to a universal physics.
Physics is concerned largely with finding the minimally complex algorithm that fully predicts O.
A noble goal, but who says that sufficient simplicity to allow for computability is possible?
There is no such complexity limit whatsoever on computability - it is not as if a phenomenon has to be sufficiently 'simple' for it to be computable in theory (although practical computability is a more complex issue).
Suppose our universe contains some true randomness beyond its initial seeding?
True randomness comes up immediately in quantum mechanics. This isn't an obstacle to computability, whether theoretical or practical. People unfamiliar with computing often have the notion that it must be deterministic. This is not so. Computation can be nondeterministic and randomness is an optimal strategy in many algorithms.
Beyond that, the randomness in quantum mechanics is typically squashed by the central limit theorem; a vast quantity of non-deterministic quantum events become increasingly deterministic at the macro scale.
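A quick sketch of that squashing (illustrative Python of my own, with fair coin flips standing in for nondeterministic quantum events): a single event is maximally unpredictable, but the average of many becomes almost perfectly predictable.

```python
import random

random.seed(0)  # fixed seed so the demo is repeatable

def mean_of_events(n):
    """Average outcome of n fifty-fifty 'quantum' events."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# The spread across repeated trials shrinks roughly as 1/sqrt(n):
# micro-scale randomness washes out into macro-scale determinism.
for n in (1, 100, 10_000, 1_000_000):
    samples = [mean_of_events(n) for _ in range(20)]
    print(n, round(max(samples) - min(samples), 4))
```

At n = 1 the trials range over the full interval; by a million events they cluster tightly around 0.5.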
What if, while the universe is computable in principle, we cannot perfectly compute even a portion of our universe from the inside?
This is true - we can't perfectly compute very much of our universe from within it, but perfect computation is highly overrated, and regardless this has little bearing on whatever original track we once were on.
I don't mean to suggest that it's implausible that everything is governed by a universal physics
It is trivially true, tautological - it is implied by the very meaning of universal physics.
It sounds to me that you have a mystery (consciousness) that you would like to protect.
I just get frustrated when people assert, without evidence that's apparent to me, that physics will surely explain everything that we might wish to know.
This also is trivially true, and is the main point I have been attempting to communicate. Anything that you could possibly want to know can be explained by some model. This fact doesn't require much evidence at all.
If there is some new series of observations that physical science can truly not explain, then it is physical science which changes until it does explain them.
OK, thank you for talking with me.
I've lost interest in the conversation, partly because of your minor ad hominem attack ("sounds to me like you have a mystery that you would like to protect"), but mostly because I see your arguments as dependent on assumptions that I do not share: you see it as "obvious" that physics is universal and that theories favored by Solomonoff simplicity are automatically and lexically superior to all other theories, and I do not.
If you care to defend or explain these assumptions, I might regain interest, or I might not. Proceed at your own risk of wasting your time.
In any case, thank you for a stimulating debate.
This post comprises one question and no answers. You have been warned.
I was reading "How minds can be computational systems", by William Rapaport, and something caught my attention. He wrote,
Rapaport was talking about cognition, not consciousness. The contention between these hypotheses is, however, only interesting if you are talking about consciousness; if you're talking about "cognition", it's just a choice between two different ways to define cognition.
When it comes to consciousness, I consider myself a computationalist. But I hadn't realized before that my explanation of consciousness as computational "works" by jumping back and forth between those two incompatible positions. Each one provides part of what I need; but each, on its own, seems impossible to me; and they are probably mutually exclusive.
Option 1: Consciousness is computed
If consciousness is computed, then there are no necessary dynamics. All that matters is getting the right output. It doesn't matter what algorithm you use to get that output, or what physical machinery you use to compute it. In the real world, it matters how fast you compute it; but surely you can provide a simulated world at the right speed for your slow or fast algorithm. In humans today, the output is not produced all at once - but from a computationalist perspective, that isn't important. I know "emergence" is wonderful, but it's still Turing-computable. Whatever a "correct" sequence of inputs and outputs is, even if they overlap in time, you can summarize the inputs over time in a single static representation, and the outputs in a static representation.
So what is conscious, in this view? Well, the algorithm doesn't matter - remember, we're not asking for O(consciousness); we're saying that consciousness is computed, and therefore is the output of a computation. The machine doing the computing is one step further removed than the algorithm, so it's certainly not eligible as the seat of consciousness; it can be replaced by an infinite number of computationally-equivalent different substrates.
Whatever it is that's conscious, you can compute it and represent it in a static form. The simplest interpretation is that the output itself is conscious. So this leads to the conclusion that, if a Turing machine computes consciousness and summarizes its output in a static representation on a tape, the tape is conscious. Or the information on the tape, or - whatever it is that's conscious, it is a static thing, not a living, dynamic thing. If computation is an output, process doesn't matter. Time doesn't enter into it.
The only way out of this is to claim that an output that, when coming out of a dynamic real-time system, is conscious, becomes unconscious when it's converted into a static representation, even if the two representations contain exactly the same information. (X and Y have the same information if an observer can translate X into Y, and Y into X. The requirement for an observer may be problematic here.) This strikes me as not being computationalist at all. Computationalism means considering two computational outputs equivalent if they contain the same information, whether they're computed with neurons and represented as membrane potentials, or computed with Tinkertoys and represented by rotations of a set of wheels. Is the syntactic transformation from a dynamic to a static representation a greater qualitative change than the transformation from tinkertoys to neurons? I don't think so.
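To make the "same information" criterion concrete, here is a small sketch of my own (Python; the Collatz sequence is an arbitrary stand-in for a dynamic computation): a time-ordered process is frozen onto a static "tape" and recovered intact, so an observer can translate either representation into the other.

```python
import json

def collatz_trace(n):
    """A 'dynamic' computation: generate the Collatz sequence step by step."""
    trace = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        trace.append(n)
    return trace

# The live, time-ordered process...
dynamic = collatz_trace(6)

# ...frozen into a static representation (a string on the "tape")...
static = json.dumps(dynamic)

# ...and translated back again without loss. By the information
# criterion above, the two representations are equivalent.
assert json.loads(static) == dynamic
print(static)  # -> [6, 3, 10, 5, 16, 8, 4, 2, 1]
```

Nothing in the static string is missing from the dynamic run, or vice versa; the only difference is that one unfolds in time and the other doesn't.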
Option 2: Consciousness is computation
If consciousness is computation, then we have the satisfying feeling that how we do those computations matters. But then we're not computationalists anymore!
A computational analysis will never say that one algorithm for producing a series of outputs produces an extra computational effect (consciousness) that another method does not. If it's not output, or internal representational state, it doesn't count. There are no other "by-products of computation". If you use a context-sensitive grammar to match a regular expression, it doesn't make the answer more special than if you used a regular grammar.
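As a minimal sketch of that point (my own toy example in Python, not drawn from any text): two very different procedures deciding the same regular language a*b are indistinguishable to a purely input/output analysis, and neither produces any extra computational effect the other lacks.

```python
import re

def naive_astar_b(s):
    """Hand-rolled recursive matcher for the regular language a*b."""
    if s == "b":
        return True
    return s.startswith("a") and naive_astar_b(s[1:])

def engine_astar_b(s):
    """The same language, decided by Python's regex engine."""
    return re.fullmatch(r"a*b", s) is not None

# Two very different procedures; a computational analysis sees only
# the identical input/output behaviour.
for s in ["b", "ab", "aaab", "", "ba", "abb"]:
    assert naive_astar_b(s) == engine_astar_b(s)
```

Whatever "by-product" you attribute to one matcher, the computational analysis of both is the same function from strings to booleans.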
Don't protest that a human talks and walks and thereby produces side-effects during the computation. That is not a computational analysis. A computational analysis will give the same result if you translate whatever the algorithm and machine running it is, onto tape in a Turing machine. Anything that gives a different result is not a computational analysis. If these side-effects don't show up on the tape, it's because you forgot to represent them.
An analysis of the actual computation process, as opposed to its output, could be a thermodynamic analysis, which would care about things like how many bits the algorithm erased internally. I find it hard to believe that consciousness is a particular pattern of entropy production or waste heat. Or it could be a complexity or runtime analysis, which would care about how long it takes. A complexity analysis has a categorical output; there's no such thing as a function being "a little bit recursively enumerable", whereas I believe there is such a thing as being a little bit conscious. So I'd be surprised if "conscious" is a property of an algorithm in the same way that "recursively enumerable" is. A runtime analysis can give more quantitative answers, but I'm pretty sure you can't become conscious by increasing your runtime. (Otherwise, Windows Vista would be conscious.)
Option 3: Consciousness is the result of quantum effects in microtubules
Just kidding. Option 3 is left as an exercise for the reader, because I'm stuck. I think a promising angle to pursue would be the necessity of an external observer to interpret the "conscious tape". Perhaps a conscious computational device is one that observes itself and provides its own semantics. I don't understand how any process can do that; but a static representation clearly can't.
ADDED
Many people are replying by saying, "Obviously, option 2 is correct," then listing arguments for, without addressing the problems with option 2. That's cheating.