The problem with state-machine materialism is not that it models the world in terms of causal interactions between things-with-states; the problem is that it can't go any deeper than that, yet apparently we can.
I may have missed the part where you explained why qualia can't fit into a state-machine model of the universe. Where does the incompatibility come from? I'm aware that it looks like no human-designed mathematical object has experienced qualia yet, which is some level of evidence for it being impossible, but not so strong that I think you're justified in saying a materialist/mathematical-platonist view of reality can never account for conscious experiences.
I might be mistaken, but it seems like you're forwarding a theory of consciousness, as opposed to a theory of intelligence.
Two issues with that: first, that's not necessarily the goal of AI research; second, you're evaluating consciousness, or possibly intelligence, from the inside rather than the outside.
I think consciousness is relevant here because it may be an important component of our preferences. For instance, all else being equal, I would prefer a universe filled with conscious beings to one filled with paper clips. If an AI cannot figure out what consciousness is, then it could have a hard time enacting human preferences.