Self-awareness is a categorical leap for animals; I'm not convinced it is a categorical leap for AI.
Much of what animals think is wrapped up in the biological machinery that enables the computation. For example, animal problem solving is, at its most fundamental level, driven by pain and pleasure, which in turn leads to better planning as self-awareness grows, and then, via mirror neurons, to theory of mind and so on.
For machines this kind of hierarchy of intelligence doesn't exist, because we're designing them from scratch. We could, for example, make machines self-aware yet unconcerned with machine-equivalent notions of pain or pleasure: unconcerned, that is, with their own safety or preservation from harm, merely optimizing some set of parameters in the external world. Simply embedding a model of the machine that houses the intelligence into the model of the world it operates on isn't ethically important by itself. It becomes ethically important only when life-like fundamentals, like a drive for self-preservation, are also programmed into the machine. The machine has to "care" about the self.
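To make that concrete, here's a minimal sketch of the decoupling (all the names here, WorldState, self_integrity, objective, are hypothetical illustrations, not any real system): the agent's world model includes a representation of its own body, but the objective function reads only external-world parameters, so self-damage never enters the optimization.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class WorldState:
    # External parameters the agent is tasked with optimizing.
    crop_yield: float
    soil_moisture: float
    # The agent's self-model, embedded in its world model: it is
    # tracked for prediction, but absent from the objective below.
    self_integrity: float

def objective(state: WorldState) -> float:
    # Scores only the external world; self_integrity is deliberately
    # excluded, so the agent models itself without "caring" about itself.
    return state.crop_yield + 0.1 * state.soil_moisture

# Hypothetical actions: each predicts a successor world state.
ACTIONS = {
    "irrigate_gently": lambda s: replace(s, soil_moisture=s.soil_moisture + 1.0),
    "ford_the_river": lambda s: replace(s, crop_yield=s.crop_yield + 5.0,
                                        self_integrity=s.self_integrity - 0.5),
}

def choose_action(state: WorldState) -> str:
    # Greedy one-step planner over the external-only objective.
    return max(ACTIONS, key=lambda name: objective(ACTIONS[name](state)))

state = WorldState(crop_yield=10.0, soil_moisture=2.0, self_integrity=1.0)
print(choose_action(state))  # "ford_the_river": self-damage never enters the score
```

Under this objective the planner happily picks the self-damaging action whenever it improves the external score; the agent would start to "care" about itself only if self_integrity were added back into objective().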
Now... we're the ones who program them to "care" about the self. So maybe we shouldn't do that. Maybe it's not necessary for intelligence. For humans, intelligence was a consequence of these fundamental drives; for machines, they can be skipped entirely, which decouples self-awareness from the question of consciousness and the ethical questions it raises.