When you self-distill a model (i.e., train a new model on the predictions of your old model), the resulting model represents a less complex function. After many rounds of self-distillation, you essentially end up with a constant function. This paper makes the above more precise.
Anyway, if you apply multiple rounds of self-distillation to a model, it becomes less complex. So if the original model learned complex, power-seeking behaviors that don't help it do well on the training data, those behaviors would likely go away after several rounds of self-distillation. Self-distillation essentially lets you get the minimum-complexity model that still does well on the test set. Thus, I think it's promising from an AI safety standpoint.
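To make the procedure concrete, here's a rough sketch of what iterated self-distillation could look like in PyTorch. The `make_fresh_model` factory, the KL-based distillation loss, and the temperature are my own illustrative assumptions, not anything specified by the paper:

```python
import copy
import torch
import torch.nn.functional as F

def distill_round(teacher, student, train_loader, epochs=1, lr=1e-3, temperature=2.0):
    """One round of self-distillation: train `student` to match the
    (softened) predictions of `teacher` on the same training inputs."""
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    teacher.eval()
    student.train()
    for _ in range(epochs):
        for inputs, _ in train_loader:  # original labels are ignored
            with torch.no_grad():
                teacher_logits = teacher(inputs)
            student_logits = student(inputs)
            # KL divergence between softened teacher and student distributions
            loss = F.kl_div(
                F.log_softmax(student_logits / temperature, dim=-1),
                F.softmax(teacher_logits / temperature, dim=-1),
                reduction="batchmean",
            ) * temperature ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student

def self_distill(initial_model, make_fresh_model, train_loader, rounds=5):
    """Repeated self-distillation: round k's student becomes round k+1's teacher."""
    teacher = initial_model
    for _ in range(rounds):
        student = make_fresh_model()      # freshly initialized model
        student = distill_round(teacher, student, train_loader)
        teacher = copy.deepcopy(student)  # the new model replaces the old one
    return teacher
```

Each student only ever sees the previous model's outputs, so any structure in the original model that isn't needed to reproduce its predictions on the training inputs tends to get washed out over successive rounds.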
The problem with power-seeking behavior is that it helps the model do well on quite a broad range of tasks.
As of right now, I don't think that LLMs are trained to be power-seeking and deceptive.
Power-seeking is likely if the model is directly maximizing rewards, but LLMs are not quite doing this.
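To spell out the distinction I have in mind (my own framing, not a description of any lab's training setup): an RL agent is optimized for something like expected return, whereas a base LLM is optimized for next-token log-likelihood,

$$
\max_{\pi} \; \mathbb{E}_{\tau \sim \pi}\!\Big[\textstyle\sum_t r(s_t, a_t)\Big]
\qquad \text{vs.} \qquad
\max_{\theta} \; \sum_{x \in \mathcal{D}} \sum_t \log p_\theta(x_t \mid x_{<t}).
$$

The first objective rewards whatever raises return, which is where instrumental strategies like power-seeking come in; the second only rewards matching the training distribution token by token.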