There are people here who are working on preventing UAI-- I'm not sure they're right (I have my doubts about provable Friendliness), but it's definitely part of the history of and purpose for the site.
While Yudkowsky is hardly the only person to work on practical self-improvement, it amazes me that it took a long-range threat to get people to work seriously on the sunk-cost fallacy and the like-- and on teaching people how to notice their biases and give them up.
Most people aren't interested in existential risk, but some of the people who are interested in the site obviously are.
Granted, but is it a core aspect of the site? Is it something your users need to know, to know what Less Wrong is about?
Beyond that, does it signal the right things about Less Wrong? (What kinds of groups are worried about existential threats? Would you consider worrying about existential threats, in the general case rather than this specific case, to be a sign of a healthy or unhealthy community?)
I told an intelligent, well-educated friend about Less Wrong. She googled it, got "Less Wrong is an online community for people who want to apply the discovery of biases like the conjunction fallacy, the affect heuristic, and scope insensitivity in order to fix their own thinking," and gave up immediately because she'd never heard of any of those biases.
While hers might not be the best possible attitude, I can't see what we gain by driving people away with obscure language.
Possible improved introduction: "Less Wrong is a community for people who would like to think more clearly in order to improve their own and other people's lives, and to make major disasters less likely."