Verified pointwise causal isomorphism of repeatable small parts, combined with surface behavioral equivalence at mundane levels of abstraction, is sufficient for me to relegate the alternative hypothesis to the realm of 'not worth thinking about any more'.
The kind of model which postulates that "a conscious em-algorithm must not only act like its corresponding human, it must also be structured like that human under the hood" would be unlikely to stop at "... at least be structured like that human for, say, nine orders of magnitude down from human scale, to the level a human can see through an electron microscope; that's enough, after that it doesn't matter (much / at all)". Wouldn't that be kind of arbitrary and make for an ugly model?
Instead, if structural correspondence allowed for significant additional confidence that the em's professions of being conscious were true, wouldn't such a model simply never stop, demanding "turtles all the way down"?
I guess I'm not sure what partial structural fidelity can contribute compared to "just" overall functional equivalence (and I find those models too contrived which place consciousness somewhere beyond functional equivalence, yet still in the upper echelons of the substructure, conveniently not too far from the surface level).
IOW, the big (viable) alternative to functional equivalence, namely structural equivalence (which subsumes functional equivalence), would likely not stop just a few levels down.
I'm kind of confused. Did we really mean odds or primes? If we told the robot that this statement was true for the first N integers, shouldn't we have stated it correctly? If we did mean primes, we could at least have been honest and said '2, 3, 5, 7'. A quick check of which property the examples actually pin down is sketched below.
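Here is a minimal sketch of that ambiguity; the `is_prime` helper and the example list `[1, 3, 5, 7]` are my own illustrative choices, not something from the original exchange:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for tiny n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# The small examples a robot might be shown (assumed list).
examples = [1, 3, 5, 7]

odds = [n for n in examples if n % 2 == 1]     # -> [1, 3, 5, 7]
primes = [n for n in examples if is_prime(n)]  # -> [3, 5, 7]

print("odd:  ", odds)
print("prime:", primes)
# Both properties hold for 3, 5, and 7, so these examples
# underdetermine the intended rule. Listing '2, 3, 5, 7' instead
# would have pinned down 'prime', since 2 is prime but not odd.
```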