Apparently a PhD candidate at the Social Robotics Lab at Yale created a self-aware robot:
In the mirror test, developed by Gordon Gallup in 1970, a mirror is placed in an animal’s enclosure, allowing the animal to acclimatize to it. At first, the animal will behave socially with the mirror, assuming its reflection to be another animal, but eventually most animals recognize the image to be their own reflections. After this, researchers remove the mirror, sedate the animal and place an ink dot on its frontal region, and then replace the mirror. If the animal inspects the ink dot on itself, it is said to have self-awareness, because it recognized the change in its physical appearance.
[...]
To adapt the traditional mirror test to a robot subject, computer science Ph.D. candidate Justin Hart said he would run a program that would have Nico, a robot that looks less like R2D2 and more like a jumble of wires with eyes and a smile, learn a three-dimensional model of its body and coloring. He would then change an aspect of the robot’s physical appearance and have Nico, by looking at a reflective surface, “identify where [his body] is different.”
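Purely as illustration of that "identify where it is different" step, here is the kind of comparison I imagine, sketched very roughly (none of this is from the actual Nico code; the body-part names, the per-part color model, and the threshold are all invented for the example):

```python
# Hypothetical sketch: compare a learned appearance model of the robot's body
# against what it currently sees in the mirror, and flag parts that have changed.
import numpy as np

# Learned self-model: mean RGB color per body part (assumed representation).
learned_model = {
    "head":      np.array([200, 200, 200]),
    "left_arm":  np.array([180, 180, 185]),
    "right_arm": np.array([182, 179, 184]),
}

def find_changed_parts(observed_colors, model, threshold=40.0):
    """Return body parts whose observed color in the reflection differs
    from the learned self-model by more than `threshold`."""
    changed = []
    for part, expected in model.items():
        diff = observed_colors[part].astype(float) - expected.astype(float)
        if np.linalg.norm(diff) > threshold:
            changed.append(part)
    return changed

# Example: a bright marker has been placed on the right arm.
observed = {
    "head":      np.array([198, 201, 199]),
    "left_arm":  np.array([181, 179, 186]),
    "right_arm": np.array([255, 60, 60]),   # very different from the self-model
}
print(find_changed_parts(observed, learned_model))  # ['right_arm']
```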
What do Less Wrongians think? Is this "cheating" traditional concepts of self-awareness, or is self-awareness "self-awareness" regardless of the path taken to get there?
I don't at all think that this robot possesses full-blown human-style self-awareness, but, depending on the actual algorithms used in self-recognition, I think it passes the mirror test in a meaningful way.
For instance, if it learned to recognize itself in the mirror by moving around and noticing strong correlations between its model of its own movements and the image it sees, then concluded that the image is a reflection of itself, and then identified visual changes in itself, I would say that it has a self-model in a meaningful and important way. It doesn't contextualize itself in a social setting, or model itself as having emotions, but it is self-representing.
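To make the kind of correlation check I have in mind concrete, here is a minimal, hypothetical sketch (not anything Nico actually runs; the signals and names are made up): compare the robot's own motion signal over time with the motion of a tracked region in the camera image, and treat a high correlation as evidence that the region is its own reflection.

```python
# Hypothetical sketch: "that thing moves when I move, so it is probably me."
import numpy as np

def motion_correlation(self_motion, observed_motion):
    """Pearson correlation between the robot's own motion magnitude per frame
    and the motion magnitude of a tracked image region."""
    return float(np.corrcoef(self_motion, observed_motion)[0, 1])

rng = np.random.default_rng(0)
self_motion   = rng.random(200)                        # e.g. joint-speed magnitudes per frame
mirror_region = self_motion + 0.1 * rng.random(200)    # closely tracks the robot's movements
other_agent   = rng.random(200)                        # moves independently of the robot

print(motion_correlation(self_motion, mirror_region))  # close to 1 -> likely "me"
print(motion_correlation(self_motion, other_agent))    # near 0 -> likely someone else
```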
This robot has less self-modeling capability, I would say: http://www.youtube.com/watch?v=ehno85yI-sA
It's able to update a model of itself, but not recognize itself.
Towards the end of the video, I feel a similar or greater amount of sympathy for the robot's "struggle" as I do for injured crustaceans. I also endorse that level of sympathy, and can't really think of a meaningful functional difference between the two that would make the crustacean's intelligence more important.
If it just took for granted that it was looking at itself and updated a model of itself, it would still have some kind of self-model, but that seems less important.
That's a really good point. Your comment and thomblake's comment do a pretty good job of dismantling my remark.