Panpsychism seems like a plausible theory of consciousness, yet it raises extreme challenges for establishing reasonable ethical criteria.
It seems to suggest that our ethics is very subjective: Peter Singer's "expanding circle" would eventually (ideally) stretch to encompass all matter. But how are we to communicate with, e.g., rocks? Our ability to communicate with one another, and our presumed ability to detect falsehood and empathize in a meaningful way, let us set this challenge aside with respect to other people.
One way to argue that this is not such a problem is to suggest that humans are simply very limited as ethical beings: our perception of ethical truth only lets us draw conclusions with any meaningful degree of certainty about other humans and animals (or perhaps all life-forms, if you are optimistic).
But this is not very satisfying if we consider transhumanism. Are we to rely on AI to extrapolate our intuitions to the rest of matter? How do we know that our intuitions are correct (or do we even care? I do, personally...)? And how could we tell whether an AI is extrapolating them correctly?
If some things are computers and some aren't, then there has to be some rule differentiating the two, which incurs a complexity penalty. Thus by Occam's razor everything is likely a computer. Does this argument work?
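One toy way to see what the "complexity penalty" step is claiming (the per-character coding and the example rule texts below are my own illustrative assumptions, not part of the argument):

```python
# Crude sketch: a hypothesis that draws a line between "computers" and
# "non-computers" must also pay the bits needed to state where that line is.

def rule_length_bits(rule: str) -> int:
    # Very rough proxy for description length: 8 bits per character of the rule.
    return 8 * len(rule)

h_everything = "every physical system is a computer"
h_only_some = ("a physical system is a computer iff it has stable, "
               "distinguishable states and a controllable transition rule")

print(rule_length_bits(h_everything))  # shorter description, smaller Occam penalty
print(rule_length_bits(h_only_some))   # must also encode the dividing rule
```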
My position: consciousness is in the map, not the territory.
Maybe, because humans often lie or have inaccurate self-knowledge.
Well, you can turn some unusual things into computers: billiard tables, model trains, and the Game of Life are all Turing-complete.
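For concreteness, here is a minimal sketch of Conway's Game of Life in Python; it only shows the simple local rule, while the Turing-completeness claim rests on far more elaborate glider and glider-gun constructions built on top of it:

```python
from collections import Counter

def life_step(live_cells: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Advance one generation; live_cells is a set of (x, y) coordinates."""
    # Count, for every cell adjacent to a live cell, how many live neighbours it has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 live neighbours,
    # or 2 live neighbours and is currently alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A glider: the classic pattern that carries information across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))  # same shape, shifted one cell diagonally after 4 steps
```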
But I do have a good understanding of what a computer is. Perhaps my Occam's-razor-based prior is that everything is a computer, but then I observe that I can't use most objects to compute anything, so I update...
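A toy numerical version of that update might look like the following; the prior and likelihoods are invented purely to illustrate the shape of the reasoning:

```python
# Toy Bayesian update for the comment above; all numbers are assumptions
# chosen for illustration, not derived from physics or algorithmic probability.

prior_everything_is_a_computer = 0.9   # assumed simplicity-favoured prior

# Observation: "I tried to compute something with a rock and couldn't."
# Likelihood of that observation under each hypothesis (assumed values):
p_obs_if_everything_computes = 0.3   # even universal substrates may be unusable by us
p_obs_if_only_some_compute = 0.99    # expected if rocks simply aren't computers

posterior = (
    p_obs_if_everything_computes * prior_everything_is_a_computer
    / (
        p_obs_if_everything_computes * prior_everything_is_a_computer
        + p_obs_if_only_some_compute * (1 - prior_everything_is_a_computer)
    )
)
print(round(posterior, 3))  # ~0.732: the Occam prior survives, but weakened
```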