From maybe 2013 to 2016, DeepMind was at the forefront of hype around AGI. Since then, they've done a lot less of it. For example, AlphaStar was not hyped nearly as much as I think it could have been.
I think that there's a very solid chance that this was an intentional move on the part of DeepMind: that they've been deliberately avoiding making AGI capabilities seem sexy.
In the wake of big public releases like ChatGPT and Sydney and GPT-4, I think it's worth appreciating this move on DeepMind's part. It's not a very visible move. It's easy to fail to notice. It probably hurts their own position in the arms race. I think it's a prosocial move.
If you are the sort of person who is going to do AGI capabilities research—and I recommend against it—then I'd recommend doing it at places that are more likely to be able to keep their research private, rather than letting it contribute to an arms race that I expect would kill literally everyone.
I suspect that DeepMind has not only been avoiding hype, but also avoiding publishing a variety of their research. Various other labs have also been avoiding both, and I applaud them too. And perhaps DeepMind has been out of the limelight because they focus less on large language models, and the results that they do have are harder to hype. But insofar as DeepMind was in the limelight, and did intentionally step back from it and avoid drawing tons more attention and investment to AGI capabilities (in light of how Earth is not well-positioned to deploy AGI capabilities in ways that make the world better), I think that's worth noticing and applauding.
(To be clear: I think DeepMind could do significantly better on the related axis of avoiding publishing research that advances capabilities, and for instance I was sad to see Chinchilla published. And they could do better at avoiding hype themselves, as noted in the comments. At this stage, I would recommend that DeepMind cease further capabilities research until our understanding of alignment is much further along, and my applause for the specific act of avoiding hype does not constitute a general endorsement of their operations. Nevertheless, my primary guess is that DeepMind has made at least some explicit attempts to avoid hype, and insofar as that's true, I applaud the decision.)
While I largely agree with this comment, I do want to point out that AlphaStar did in fact do some amount of scouting. When Oriol and Dario spoke about AlphaStar on The Pylon Show in December 2019, they showed an example of AlphaStar specifically checking for a lair, and described other situations where it would check for a handful of other building types.
They also spoke about how it is particularly deficient at scouting, only looking for a few specific buildings, and how this causes it to significantly underperform in situations where humans would use scouting to gain an edge.
You said that "[w]e never saw AlphaStar do something as elementary as scouting the enemy's army composition and building the units that would best counter it." I'm not sure this is strictly true. At least according to Oriol, AlphaStar did scout for a handful of building types (though not necessarily unit types) and appeared to change its behavior according to the buildings it scouted.
With that said, this nitpick doesn't change the main point of your comment, which I concur with. AlphaStar did not succeed in nearly the way they hoped it would, and the combination of its long training time and StarCraft's frequent balance patches meant it would have been prohibitively expensive to train it to a level comparable to what AlphaGo had achieved.