I may have missed the part where you explained why qualia can't fit into a state-machine model of the universe.
I think Mitchell's point is that we don't know whether state machines have qualia, and the costs of making assumptions could be large.
The topic of this article is how rational agents should solve a particular tragedy of the commons. Certainly, a common moral code is one solution to this problem: an army will have no deserters if each soldier morally refuses to desert. I don't want to put words in your mouth, but you seem to think that a common morality is the best, or perhaps the only, solution.
I think Eliezer is more interested in situations where this solution is impractical. Perhaps the rationalists are a society of people with vastly differing moral codes; even then, they should still be capable of agreeing to coordinate, even if that means giving up things that they individually value.
One of my core values is liberty: the ability of each individual to make his or her own decisions and live his or her life accordingly (so long as doing so does not impede anyone else's right to do the same).
How do you feel about desertion?
This comment was sitting at -2 when I saw it, which makes me think that maybe I don't understand Eliezer's point. I thought the OP was arguing that when we call something "attractive" or "moral" or maybe even "sapient", we project facts about our minds onto the real world. "Attractive" really means "attractive to humans", and if we forget this, we can inadvertently draw wrong conclusions about the world. If that's wrong, then what was this post actually about?
Remember that Kolmogorov complexity depends on your choice of "universal Turing machine", so we should expect to get only estimates. Mitchell estimates ~50000 bits for the New Minimal Standard Model. I'm not an expert in physics, but the mathematics required to explain what a Lagrangian is would seem to require much more than that. I think you would need Peano arithmetic and a good deal of set theory just to construct the real numbers so that you could do calculus. (Of course, people were doing calculus for over one hundred years before the real numbers were rigorously constructed, but I have a hard time imagining a rigorous calculus without them.) I admit that 50000 bits is a lot of data, but I'm sceptical that it could rigorously encode all that mathematics.
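For reference, the machine-dependence is exactly what the invariance theorem quantifies: for any two universal machines $U$ and $V$ there is a constant $c_{U,V}$, independent of the string $x$, such that

$$K_U(x) \le K_V(x) + c_{U,V},$$

where $c_{U,V}$ is essentially the length of an interpreter for $U$ written for $V$. So a figure like 50000 bits is only meaningful up to an additive constant, and that constant could itself be large.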
F=ma has the same problem, of course. Does the right-hand side really make sense without calculus?
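Written out, the right-hand side is

$$F = m\,\frac{d^2x}{dt^2},$$

a second derivative, so the equation already presupposes limits and everything needed to define them.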
ETA: If you want a fleshed-out example, I think a much better problem to start with would be predicting the digits of pi or the prime numbers.
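To make the pi example concrete: the decimal digits of pi never repeat, yet a short program generates all of them, which is exactly the sense in which the sequence has low Kolmogorov complexity. Here is a minimal sketch in Python, a port of the streaming spigot algorithm from Gibbons' 2006 paper "Unbounded Spigot Algorithms for the Digits of Pi" (my illustration, not something anyone above proposed):

```python
from itertools import islice

def pi_digits():
    """Stream the decimal digits of pi, one at a time.

    Port of the streaming spigot algorithm from Gibbons (2006).
    Python's arbitrary-precision integers mean the generator
    never needs to know in advance how many digits you want.
    """
    q, r, t, i = 1, 180, 60, 2
    while True:
        u = 3 * (3 * i + 1) * (3 * i + 2)
        y = (q * (27 * i - 12) + 5 * r) // (5 * t)  # next digit
        yield y
        q, r, t, i = (10 * q * i * (2 * i - 1),
                      10 * u * (q * (5 * i - 2) + r - y * t),
                      t * u,
                      i + 1)

# First ten digits: [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
print(list(islice(pi_digits(), 10)))
```

The entire infinite sequence is pinned down by these few lines, so any reasonable universal machine assigns pi a tiny complexity, and the machine-dependence worry above shrinks to an additive constant.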
I think consciousness is relevant here because it may be an important component of our preferences. For instance, all else being equal, I would prefer a universe filled with conscious beings to one filled with paper clips. If an AI cannot figure out what consciousness is, then it could have a hard time enacting human preferences.