I'm doing an undergraduate course on the Free Will Theorem, with three lecturers: a mathematician, a physicist, and David Chalmers as the philosopher. The course is a bit pointless, but the company is brilliant. Chalmers is a pretty smart guy. He studied computer science and math as an undergraduate, before "discovering that he could get paid for doing the kind of thinking he was doing for free already". He's friendly; I've been chatting with him after the classes.
So if anyone has questions for him that seem interesting enough, I could approach him with them.
Emails to him also work, of course, but discussing things in person lets understanding happen much faster. For example, in a short conversation with him I understood his position on consciousness way better than I would have just from reading his papers on the topic.
That would be like asking for a mathematical description of the problem "why is there something rather than nothing?"
One way in which people lose their sensitivity to such questions is that they train themselves to turn every problem into something that can be solved by their favorite formalized methods. So if it can't be turned into a program, or a Bayesian formula, or..., it's deemed to be meaningless.
But every formalism starts life as an ontology. Before we "formalized" logic or arithmetic, we related to the ontological content of those topics: truth, reasoning, numbers... The quest for a technical statement of philosophical hard problems often amounts to an evasion of a real ontological problem that underlies or transcends the formalism or discipline of choice. XiXiDu, you don't strike me as someone who would deliberately do this, so maybe you're just being a little naive - you want to think rigorously, so you're reaching for a familiar model of rigor. But the really hard questions are characterized by the fact that we don't know how to think rigorously about them - we don't have a method, ready at hand, which allows us to mechanically compute the answer. There was a time when there was no such thing as algebra, or calculus, or propositional logic. How were they invented? Look into that question, and you will be investigating how rigor and method were introduced where previously they did not exist. That is the level at which "hard problems" live.
The combination of computationalism and physicalism has become a really potent ossifier of thought, because it combines the rule-following of formalism with the empirical relevance of physics. "We know it's all atoms, so if we can reduce it to atoms we're done, and neurocomputationalism means we can focus on explaining why a question was asked, rather than on engaging with its content" - that's how this particular reductionism works.
There must be a Zen-like art to reawakening fresh perception of reality in individuals attached to particular formalisms, formulas, and abstractions, but it would require considerable skill, because you have to enter into the formalism while retaining awareness of the ontological context it supposedly represents. You have to reach the heart of the conceptual labyrinth where the reifier of abstractions is located, and then lead them out, so they can see directly again the roots in reality of their favorite constructs - and thereby also see the aspects of reality that aren't represented in the formalism, but which are just as real as those that are.
But that isn't the problem. Chalmers never asserts that you can't simulate consciousness, in the sense of making an abstract state-machine model that imitates the causal relations of consciousness with the world. The question is why it feels like something to be what we are: why is there any awareness at all? (There are, again, ways to evade the question here, e.g. by defining awareness behavioristically.)
The history of the concept of computation seems very analogous to the development of the concept of justification. I think we're at roughly the Leibniz stage of figuring out justification. (I sorta want to write up a thorough analysis of this somewhere.)