Yes, check e.g. https://www.lesswrong.com/posts/H5iGhDhQBtoDpCBZ2/announcing-the-alignment-of-complex-systems-research-group or https://ai.objectives.institute/, or, partially, https://www.pibbss.ai/
You won't find much of this on LessWrong, since LW has been an unfavorable environment for this line of thinking.
It seems to have become apparent that existing social systems are functionally unaligned: organizational and market dynamics are often cited as important factors exacerbating AI danger. It seems to me that progress on civilizational alignment would be instrumental in increasing the chances of successfully navigating the AI alignment challenge, and that the two have significant theoretical overlap, since both fall within the broader class of agent-alignment problems.
Are there people or groups actively looking into civilizational alignment, and is there cross-pollination with AI alignment work?