why is self-awareness usually discussed as something profoundly mysterious and advanced?
Because "self-awareness" is sometimes used to mean "consciousness", which is indeed mysterious (nobody knows what it is -- if they did, less would be written about the question of what it is) and advanced (nobody knows what an explanation would even look like).
And "self-awareness" is also used to mean "having any sort of model of oneself", which many simple machines have -- a fairly trivial sort of thing. If one does not notice that the same word is being used to mean two different things, one mysterious and one mundane, the resulting confusion can be mistaken for even greater profundity, mysteriousness, and advanced thinking.
why is self-awareness usually discussed as something profoundly mysterious and advanced?
There are interesting questions connected with conscious self-awareness. Specifically, whether our conscious experience is (and directs) the thought process, or whether it is a shadow that lags behind most actual decision making. There's an interesting experiment with split-brain patients where one half of the brain can see a glass of water and reaches a hand out to it, but the other half is unaware of this and makes up a reason on the fly for why it carried out that action.
Have you read the essay on self-awareness by V. S. Ramachandran?
I'm unsure as to how my internal experience is anything other than just one more sensory experience. Although I haven't thought sufficiently carefully about it yet to have high confidence.
why is self-awareness usually discussed as something profoundly mysterious and advanced?
One factor is probably that evolution built us to believe that we are the most wonderful and precious thing ever. We also appear to be built to believe that we are our egos. The combination of these factors apparently leads to some of the issues that you mention.
That's one good definition.
The thing is that there's nothing complicated or mysterious whatsoever about having a self-model. If I were to write an autopilot, I would include a flight simulator inside, to test the autopilot's outputs and ensure that they don't kill the passenger (me) *. I could go fancy and include the autopilot in the simulation itself, so as to ensure that the autopilot does not put the airplane into a situation where the autopilot can't evade a collision.
Presto, a self-aware airplane, which is about as smart as a brain-damaged fruit fly. It's even aware of the autopilot inside the airplane.
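The simulator-inside-the-autopilot idea above can be sketched in a few lines. To be clear, this is a toy, not avionics: the one-dimensional "physics", the function names, and all the numbers are invented purely for illustration.

```python
def simulate(altitude, climb_rate, command, steps=10):
    """Crude internal model of the aircraft: predict the altitude after
    applying `command` (a climb-rate change per step) for `steps` steps."""
    for _ in range(steps):
        climb_rate += command
        altitude += climb_rate
    return altitude

def autopilot(altitude, climb_rate, target=1000):
    """Propose a command, then check it against the internal model of the
    plane before acting on it -- the 'flight simulator inside' step."""
    proposed = 1 if altitude < target else -1
    # Self-model check: would acting on this command fly us into the ground?
    if simulate(altitude, climb_rate, proposed) <= 0:
        proposed = 1   # override: crash predicted, climb instead
    return proposed

print(autopilot(1050, 0))     # -1: a gentle descent toward the target is fine
print(autopilot(1050, -150))  # 1: the naive descend is overridden -- crash predicted
```

The second call is the whole point: the naively "correct" command (descend toward the target altitude) gets rejected because the internal model predicts it ends underground.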
If I were to write a chess AI: a chess AI is recursive; it tries a move and then 'thinks': what would it do in the resulting situation? It uses itself as its own self-model.
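That recursive "what would I do in the opponent's place?" step looks like this. A real chess engine won't fit here, so the sketch runs plain negamax over a trivial counting game (players alternately take 1-3 stones; taking the last stone wins) -- the recursion has exactly the same shape.

```python
def negamax(stones):
    """Return (best_score, best_move) for the player to move in a game
    where players alternately take 1-3 stones and taking the last wins.
    To score a move, the AI asks what *it* would do in the opponent's
    position -- literally using itself as the model of the opponent."""
    if stones == 0:
        return -1, None                  # the last stone is gone: we lost
    best_score, best_move = -2, None     # worse than any reachable score
    for take in (1, 2, 3):
        if take <= stones:
            score = -negamax(stones - take)[0]   # recurse: self as opponent
            if score > best_score:
                best_score, best_move = score, take
    return best_score, best_move

print(negamax(5))   # (1, 1): take 1, leaving the opponent the losing 4-pile
print(negamax(4))   # (-1, 1): every move loses against a self-like opponent
```

Note the single negated recursive call: there is no separate "opponent model" anywhere in the program; the opponent is simply assumed to be another copy of this same function.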
Speaking of dogs, the Boston Dynamics BigDog robot, from what I know, includes a model of its own physics. It is about as smart as a severely brain-damaged cockroach.
So you end up with a lot of non-living things being self-aware -- constantly self-aware, whereas a case can be made that humans aren't constantly self-aware -- and these self-aware non-living things are dumber than a cockroach.
edit: one could shift the goalposts and require that the animal be capable of developing a self-model; well, you can teach a dog to balance on a rope, and balancing on a rope pretty much requires some form of model of the body's physics. You can also make a pretty stupid (dumber than a cockroach) AI in a robot that would build a self-model, not only of the robot's body but of the AI itself.
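A robot that *builds* a self-model, rather than being handed one, can also be sketched: a toy agent whose true dynamics are hidden from it acts at random and fits estimates of its own physics from its action/observation history, via ordinary least squares. The linear dynamics and every number here are made up for illustration.

```python
import random

A_TRUE, B_TRUE = 0.9, 0.5    # the robot's real physics, unknown to the robot

def step(x, u):
    """The world: next state from current state x and action u."""
    return A_TRUE * x + B_TRUE * u

# Gather experience: act randomly, record (state, action, next_state).
random.seed(0)
history, x = [], 1.0
for _ in range(50):
    u = random.uniform(-1, 1)
    x_next = step(x, u)
    history.append((x, u, x_next))
    x = x_next

# Fit the self-model x' = a*x + b*u by solving the 2x2 least-squares
# normal equations over the recorded history.
sxx = sum(x * x for x, u, y in history)
sxu = sum(x * u for x, u, y in history)
suu = sum(u * u for x, u, y in history)
sxy = sum(x * y for x, u, y in history)
suy = sum(u * y for x, u, y in history)
det = sxx * suu - sxu * sxu
a_est = (sxy * suu - suy * sxu) / det
b_est = (suy * sxx - sxy * sxu) / det

print(round(a_est, 2), round(b_est, 2))   # recovers 0.9 0.5
```

The "robot" ends up with an accurate model of its own dynamics despite never being told what they were -- which is about as unmysterious as developing a self-model gets.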
[ I never worked writing autopilots, and from what I gather autopilots don't generally include this at runtime but are tested on a simulator during development. I value my survival and don't have a grossly inflated view of my coding abilities, so I'd add that simulator, and add a really loud alarm to wake the pilot if anything goes wrong. An example of me using a self-model to improve my survival. From what I can see of the other programmers I know, many at the beginning have an inflated view of their coding abilities, which keeps biting them in the backside all day long until they get it -- perhaps becoming more self-aware. ]
So you end up with a lot of non-living things being self-aware.
In this sense, self-awareness is easy; the question is awareness of what exactly, and how it is used.
Awareness of one's body position is less interesting; it can only be used for movement. For a biological social species, awareness of one's behavior and mind probably leads to improved algorithms -- perhaps it is necessary for some kind of learning.
I am not sure what benefits self-awareness would bring to a machine... and maybe it depends on its construction and algorithm. For example when a m...
Something I find rather odd - why is self-awareness usually discussed as something profoundly mysterious and advanced?
People would generally agree that a dog can be aware of food in the bowl, if the dog has seen or smelled it, or can be unaware of the food bowl otherwise. One would think that a dog can be aware of itself insofar as a dog can be aware of anything else in the world, like food in the bowl. There isn't a great deal of argument about a dog's awareness of food.
Yet the question of whether a dog has 'self-awareness' quickly turns into a debate of opinions and language and shifting definitions of what 'self-awareness' is, and irrelevancies such as the question of whether the dog is smart enough to figure out how a mirror works well enough to identify a paint blotch on itself^1, or requests that it be shown beyond all doubt that the dog's mind is aware of the dog's own mind, which is something you can deny other humans just as successfully.
I find it rather puzzling.
My first theory is to assume that it is just a case of avoiding the thought due to its consequences versus the status quo. The status quo is that we, without giving it much thought, decided that self-awareness is a uniquely human quality, and then carelessly made our morality sound more universal by saying that self-aware entities are entitled to rights. At the same time, we don't care too much about other animals.
At this point, having well-'established' notions in our heads -- which weren't quite rationally established but just sort of happened over time -- we don't so much try to actually think or argue about self-awareness as try to define self-awareness so that humans are self-aware and dogs aren't, yet the definition sounds general -- or try to fight such definitions -- depending on our feelings towards dogs.
I think it is a case of a general problem with reasoning. When there's an established status quo -- one which has sort of evolved historically -- we can have real trouble actually thinking about it, rather than just making up new definitions which sound as if they had existed from the start and the status quo were justified by them.
This gets problematic when we have to think about self-awareness for other purposes, such as AI.
1: I don't see how the mirror self-recognition test implies anything about self-awareness. You pick an animal that grooms itself, and you see if that animal can groom itself using the mirror. That can work even if the animal only identifies what it wants to groom with what it sees in the mirror, without identifying either with a self (whatever that means). Or it can fail, if the animal doesn't have good enough pattern matching to match those items, even if the animal identifies what it grooms with a self and has a concept of self.
Furthermore, an animal that just wants to groom some object which is constantly nearby and whose grooming feels good could, if capable of language, invent a name for this object -- "foobar" -- and then, when making a dictionary, we'd not think twice about translating "foobar" as "self".
edit: Also, I'd say, self-recognition complicates our model of mirrors, in the "why does a mirror swap left and right rather than up and down?" way. If you look at the room in the mirror, obviously the mirror swaps front and back. Clear as day. But if you look at 'yourself' in the mirror, there's this self standing there facing you, and its left side is swapped with its right side. So the usual model of the mirror becomes a rotation of 180 degrees around the vertical axis, not the horizontal one, followed by a swapping of left and right but not up and down. You end up with a more complicated, more confusing model of the mirror, likely because you recognized the bilaterally symmetric you in it.