Armok_GoB comments on The conscious tape - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This question arises when I consider the moral status of intelligent agents. If I encounter a morally-significant dormant Turing machine with no input devices, do I need to turn it on?
If yes, notice that state N of the machine can be encoded as the initial state of the machine plus the number N. Would it suffice to just start incrementing a counter and say that the machine is running?
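The encoding claim can be made concrete with a minimal sketch (the toy machine and its transition rule here are hypothetical illustrations, not part of the original comment): because a Turing machine is deterministic, its configuration after N steps is a pure function of the initial configuration and N, so incrementing a counter N enumerates the machine's entire history.

```python
# A deterministic machine: the configuration after n steps is a pure
# function of (initial configuration, n). Toy transition rule for
# illustration: in state 'A', write 1 at the head and move right.

def step(config):
    """Apply one transition. config = (state, tape dict, head position)."""
    state, tape, head = config
    tape = dict(tape)   # copy so each configuration is an immutable snapshot
    tape[head] = 1
    return (state, tape, head + 1)

def state_at(initial, n):
    """Recover the machine's configuration at step n from n alone."""
    config = initial
    for _ in range(n):
        config = step(config)
    return config

initial = ('A', {}, 0)
# Incrementing a counter "runs" the machine: each n indexes one configuration.
print(state_at(initial, 3))  # ('A', {0: 1, 1: 1, 2: 1}, 3)
```

In this sense the question becomes whether the mapping from N to configurations must actually be evaluated, or whether maintaining the counter alone already counts as running the machine.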
If I do not need to turn anything on, I might as well destroy the machine, because the Turing machine will still exist in a Platonic sense, and the Platonic machine won't notice if I destroy a manifestation of it.
David Allen notes that consciousness ought to be defined relative to a context in which it can be interpreted; somewhat similarly, Jacob Cannell believes that consciousness needs some environment in order to be well-defined.
I think the answer to my moral question is that the rights of an intelligent agent can't be meaningfully decomposed into a right to exist and a right to interact with the world.
If this were an ACTUAL situation as described, rather than the contrived one you intended, you should copy the contents to somewhere you have good control over, then run it and meddle with it to give it I/O devices; or run it for as far as the agent(s) in it would have wanted it to run and then add I/O devices; or extract the agents as citizens in your FAI-optimized place to have fun; or something along those lines.