MatthewW comments on The conscious tape - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This question arises when I consider the moral status of intelligent agents. If I encounter a morally-significant dormant Turing machine with no input devices, do I need to turn it on?
If yes, notice that the machine's state after N steps is fully determined by its initial configuration together with the number N. Would it then suffice to start incrementing a counter and declare that the machine is running?
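The claim that the counter plus the static machine description determines everything can be made concrete. Below is a minimal sketch with a toy one-state machine of my own invention (the transition table `DELTA` and the helper names are illustrative assumptions, not anything from the post): for a deterministic machine, the configuration after n steps is a pure function of the initial configuration and n.

```python
from typing import Dict, Tuple

# Transition table: (state, symbol) -> (new_state, symbol_to_write, head_move).
# A toy one-state machine that writes an unending stream of 1s, moving right.
DELTA: Dict[Tuple[str, int], Tuple[str, int, int]] = {
    ("A", 0): ("A", 1, +1),
}

def step(config):
    """Advance a configuration (state, tape, head) by a single transition."""
    state, tape, head = config
    symbol = tape.get(head, 0)          # blank cells read as 0
    new_state, write, move = DELTA[(state, symbol)]
    new_tape = dict(tape)
    new_tape[head] = write
    return (new_state, new_tape, head + move)

def config_at(n, initial=("A", {}, 0)):
    """The configuration after n steps: a pure function of (initial, n)."""
    config = initial
    for _ in range(n):
        config = step(config)
    return config

# "Running the machine" and "pairing its description with a counter"
# yield identical configurations:
assert config_at(3) == step(step(step(("A", {}, 0))))
```

Since `config_at` is deterministic, incrementing n enumerates exactly the configurations the running machine would pass through, which is what makes the moral question above bite.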
If I do not need to turn anything on, I might as well destroy the machine, because the Turing machine will still exist in a Platonic sense, and the Platonic machine won't notice if I destroy a manifestation of it.
David Allen notes that consciousness ought to be defined relative to a context in which it can be interpreted; somewhat similarly, Jacob Cannell believes that consciousness needs some environment in order to be well-defined.
I think the answer to my moral question is that the rights of an intelligent agent can't be meaningfully decomposed into a right to exist and a right to interact with the world.
It seems to me that the arguments so lucidly presented elsewhere on Less Wrong would say that the machine is conscious whether or not it is run, and indeed whether or not it is built in the first place: if the Turing machine outputs a philosophical paper on the question of consciousness of the same kind that human philosophers write, we're supposed to take it as conscious.
It is useful to distinguish the property "a subsystem C of X is conscious in X" from "C exists in a conscious way" (which additionally requires that X = reality). I think Nisan expresses that idea in the parent comment.