toto comments on Open Thread June 2010, Part 2 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I have problems with the "Giant look-up table" post.
If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human's behaviour is dependent not just on the present state of the environment, but also on previous states. I don't see how you can successfully emulate a human without that. So the GLUT's entries would be in the form of products of input states over all previous time instants. To each of these possible combinations, the GLUT would assign a given action.
Note that "creation of beliefs" (including about beliefs) is just a special case of memory. It's all about input/state at time t1 influencing (restricting) the set of entries in the table that can be looked up at time t2>t1. If a GLUT doesn't have this ability, it can't emulate a human. If it does, then it can meet all the requirements spelt out by Eliezer in the above passage.
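The "input at t1 restricts what can be looked up at t2" point can be sketched as a table keyed on entire input histories rather than single inputs. This is only an illustrative toy (the keys, values, and `act` function are mine, not from the post):

```python
from typing import Dict, Tuple

# Each key is the full sequence of inputs seen so far; the value is the
# action the table prescribes for that history. A real GLUT would
# enumerate every possible history, which is why it is "giant".
GLUT: Dict[Tuple[str, ...], str] = {
    ("hello",): "greet back",
    ("hello", "how are you?"): "say 'fine, thanks'",
}

def act(history: Tuple[str, ...]) -> str:
    # An input at time t1 restricts which entries are reachable at
    # t2 > t1: only keys that extend the current history can ever fire.
    return GLUT.get(history, "default action")

print(act(("hello",)))                 # greet back
print(act(("hello", "how are you?")))  # say 'fine, thanks'
```

The memory is entirely in the key: no entry for a history that didn't happen can ever be looked up, which is the "restriction" described above.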
So I don't see how the non-consciousness of the GLUT is established by this argument.
But the difficulty is precisely to explain why the GLUT would be different from just about any possible human-created AI in this respect. Keeping in mind the above, of course.
Memory is input too. The "GLUT" is just fed everything it has seen so far back in as input, along with the current state of its external environment. A copy is made and added to the rest of the memory, and the next cycle it's fed in again along with the next new state.
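That cycle can be written out as a pure function: the table itself holds no state, and all "memory" lives in the transcript that gets copied and fed back each step. A minimal sketch (the `lookup` table and its entries are invented for illustration):

```python
def lookup(key):
    # Stand-in for the giant table; a real GLUT would have an entry
    # for every possible key.
    table = {("ping",): "pong", ("ping", "ping"): "pong again"}
    return table.get(key, "no entry")

def glut_step(transcript, new_input):
    # The action is a pure function of (everything seen so far, current
    # input) -- there is no hidden internal state.
    key = tuple(transcript) + (new_input,)
    action = lookup(key)
    # A copy of the history plus the new input is carried into the next cycle.
    return transcript + [new_input], action

history = []
history, a1 = glut_step(history, "ping")  # a1 == "pong"
history, a2 = glut_step(history, "ping")  # a2 == "pong again"
```

The same current input ("ping") produces a different action the second time, purely because the fed-back transcript differs.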
This is basically just the Chinese room argument. There is a sealed room, and someone slips a few Chinese symbols underneath the door every so often. The symbols are given to a computer with artificial intelligence, which then composes an appropriate response and slips it back through the door. Does the computer actually understand Chinese? Well, what if a human carried out exactly the same process the computer did, manually? However, the operator only speaks English. No matter how long he does it, he will never truly understand Chinese, even if he memorizes the entire process and does it in his head. So how could the computer "understand"?