Something I find rather odd - why is self-awareness usually discussed as something profoundly mysterious and advanced?
People would generally agree that a dog can be aware of food in its bowl, if the dog has seen or smelled it, and can be unaware of the food bowl otherwise. One would think that a dog can be aware of itself insofar as a dog can be aware of anything else in the world, like food in the bowl. There isn't a great deal of argument about a dog's awareness of food.
Yet the question of whether a dog has 'self-awareness' quickly turns into a debate of opinions, language, and shifting definitions of what 'self-awareness' is, and into irrelevancies such as whether the dog is smart enough to figure out how a mirror works well enough to identify a paint blotch on itself [1], or requests that it be shown beyond all doubt that the dog's mind is aware of the dog's own mind, which is something you can deny of other humans just as successfully.
I find it rather puzzling.
My first theory is that it's a case of avoiding a thought because of its consequences for the status quo. The status quo is that we, without giving it much thought, decided that self-awareness is a uniquely human quality, and then carelessly made our morality sound more universal by saying that self-aware entities are entitled to rights. At the same time, we don't care too much about other animals.
At this point, having well-'established' notions in our heads - notions which weren't rationally established but just sort of happened over time - we don't so much try to actually think or argue about self-awareness as try to define self-awareness so that humans are self-aware and dogs aren't, while the definition still sounds general - or try to fight such definitions - depending on our feelings toward dogs.
I think it is a case of a general problem with reasoning. When there's an established status quo - one that has evolved historically - we can have real trouble actually thinking about it; instead we make up new definitions that sound as if they had existed from the start, so that the status quo appears justified by them.
This gets problematic when we have to think about self awareness for other purposes, such as AI.
1: I don't see how the mirror self-recognition test implies anything about self-awareness. You pick an animal that grooms itself, and you see if that animal can groom itself using the mirror. That can work even if the animal only identifies what it wants to groom with what it sees in the mirror, without identifying either with a self (whatever that means). Or it can fail, if the animal doesn't have good enough pattern matching to match those items, even if the animal does identify what it grooms with a self and has a concept of self.
Furthermore, an animal that just wants to groom some object which is constantly nearby, and which feels good to groom, could, if capable of language, invent a name for this object - "foobar" - and then, when compiling a dictionary, we'd not think twice about translating "foobar" as 'self'.
edit: Also, I'd say, self-recognition complicates our model of mirrors, in the "why does a mirror swap left and right rather than up and down?" way. If you look at the room in the mirror, obviously the mirror swaps front and back. Clear as day. But if you look at 'yourself' in the mirror, there's this self standing there facing you, with its left side swapped with its right side. And then the usual model of the mirror becomes a rotation of 180 degrees around the vertical axis (not the horizontal one), followed by a swap of left and right but not up and down. You end up with a more complicated, more confusing model of the mirror, likely because you recognized your bilaterally symmetric self in it.
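For what it's worth, the geometry here can be checked directly. A minimal sketch (the axis labels are my own arbitrary choice, just for illustration): a mirror negates only the front-back axis, and the "person facing you with left and right swapped" picture composes to exactly the same transformation:

```python
# Coordinates (an arbitrary choice for illustration):
# x = left-right, y = up-down, z = front-back (toward the mirror).

def mirror(p):
    """What a mirror actually does: negate front-back only."""
    x, y, z = p
    return (x, y, -z)

def rotate_180_vertical(p):
    """Rotate 180 degrees about the vertical (y) axis."""
    x, y, z = p
    return (-x, y, -z)

def swap_left_right(p):
    """Negate the left-right axis only."""
    x, y, z = p
    return (-x, y, z)

# The "self standing there facing you, left and right swapped" model
# is the same map as the plain front-back reflection:
for p in [(0.3, 1.7, 2.0), (-1.0, 0.5, -4.0)]:
    assert rotate_180_vertical(swap_left_right(p)) == mirror(p)

# Up-down (y) is untouched in both, which is why the mirror never
# seems to swap up and down.
```

So both descriptions are correct; the second one is just a more roundabout decomposition of the first, which you only reach for after recognizing a figure in the mirror.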
I'm hesitant to call those models of any kind; they don't include any kind of abstraction, either of the program's internal state or of inferred enemy state. It's just running the same algorithm on different initial conditions; granted, this is muddled a little because classical chess AI doesn't have much internal state to speak of, just the state of the board and a tree of possible moves from there. Two copies of the same chess algorithm running against each other might be said to have a (uniquely perfect) model of their enemies, but that's more or less accidental.
I'd have to disagree about humans not doing other-modeling, though. As best I can tell we evaluate our actions relative to others primarily based on how we believe those actions affect their disposition toward us, and then infer people's actions and their effects on us from there. Few people take it much farther than that, but two or sometimes three levels of recursion is more than enough for this sort of modeling to be meaningful.
Actually, they don't have perfect models: the modeled opponent searches fewer moves ahead than the searcher itself does.
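A toy sketch of what I mean (the game tree is made up, not real chess): in a depth-d negamax search, the "opponent" inside the search is just the same function called one ply shallower, so the searcher's model of its opponent is always a weaker copy of itself, and shallow and deep searchers can end up choosing different moves:

```python
def negamax(node, depth):
    """Score a position for the side to move. Leaves are numbers;
    internal nodes are lists of successor positions."""
    if isinstance(node, (int, float)):
        return node
    if depth == 0:
        return 0  # search budget spent: a blind static evaluation
    # The "opponent model" is this very function, one ply shallower.
    return max(-negamax(child, depth - 1) for child in node)

def best_move(node, depth):
    scores = [-negamax(child, depth - 1) for child in node]
    return scores.index(max(scores))

# A made-up tree: move 0 safely cashes in 5; move 1 looks harmless at
# shallow depth, but the opponent can steer it into a line worth 20 to them.
tree = [5, [[20], [0]]]

print(best_move(tree, 1))  # the shallow searcher walks into the trap (move 1)
print(best_move(tree, 3))  # the deeper searcher sees it and takes the 5 (move 0)
```

The depth-1 searcher's "opponent" is a depth-0 evaluation that sees nothing, while the depth-3 searcher simulates a depth-2 opponent - still an imperfect model, just less so.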
With regards to what people are doing: I mean, we don't play chess like this. Yes, we model other people's states, but quite badly. The people who overthink it fail horribly at social interaction.
With chess, you could blank out ranks 1 to 3 and 6 to 8 for the first 10 moves, or the like; then you'd have some private state for the AIs to model. edit: or implement fog of war, where pieces only see the squares they attack. That doesn't make any fundamental difference here, except now there's some privat... (read more)
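The blanked-ranks variant could be sketched like this (the board representation and all names are mine, purely for illustration): each side's home ranks are hidden from the opponent's view for the first few moves, so each AI now holds private state the other could try to model:

```python
# Board: 8 ranks x 8 files of piece strings, "." for empty, "?" for hidden.
# The representation and constants below are made up for illustration.

HIDDEN_FROM_WHITE = range(5, 8)   # ranks 6-8 (black's side), 0-indexed
HIDDEN_FROM_BLACK = range(0, 3)   # ranks 1-3 (white's side), 0-indexed

def visible_board(board, player, move_number, blackout_moves=10):
    """Return `player`'s view: the opponent's home ranks are blanked
    out for the first `blackout_moves` moves."""
    if move_number >= blackout_moves:
        return [row[:] for row in board]       # full information again
    hidden = HIDDEN_FROM_WHITE if player == "white" else HIDDEN_FROM_BLACK
    return [["?"] * 8 if rank in hidden else row[:]
            for rank, row in enumerate(board)]

# Example: white's early view hides black's home ranks but not its own.
board = [["."] * 8 for _ in range(8)]
board[0][4] = "K"   # white king on rank 1
board[7][4] = "k"   # black king on rank 8
view = visible_board(board, "white", move_number=0)
print(view[0][4], view[7][4])   # white's own king is visible; black's is "?"
```

The full fog-of-war version would derive the hidden set from attack maps instead of fixed ranks, but the point is the same: the game state is no longer fully shared, so modeling the opponent's private state starts to matter.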