The following excerpt from a recent post, Recursively Self-Improving Human Intelligence, suggests to me that it is time for a reminder of the reason LW was started.
"[C]an anyone think of specific ways in which we can improve ourselves via iterative cycles? Is there a limit to how far we can currently improve our abilities by improving our abilities to improve our abilities? Or are these not the right questions; the concept a mere semantic illusion[?]"
These are not the right questions -- not because the concept is a semantic illusion, but rather because the questions are a little too selfish. I hope the author of the above words does not mind my saying that. It is the hope of the people who started this site (and my hope) that the readers of LW will eventually turn from the desire to improve their selves to the desire to improve the world. How the world (i.e., human civilization) can recursively self-improve has been extensively discussed on LW.
Eliezer started devoting a significant portion of his time and energy to non-selfish pursuits when he was still a teenager, and in the 12 years since then, he has definitely spent more of his time and energy improving the world than improving his self (where "self" is defined to include his income, status, access to important people and other elements of his situation). About 3 years ago, when she was 28 or 29, Anna Salamon started spending most of her waking hours trying to improve the world. Both will almost certainly devote the majority of the rest of their lives to altruistic goals.
Self-improvement cannot be ignored or neglected even by pure altruists, because the vast majority of people are not rational enough to cooperate with an Eliezer or an Anna without just slowing them down, and the vast majority are not rational enough to avoid catastrophic mistakes were they to try, without supervision, to wield the most potent methods for improving the world. In other words, self-improvement cannot be ignored because, now that we have modern science and technology, it takes more rationality than most people have just to be able to tell good from evil, where "good" is defined as the actions that actually improve the world.
One of the main reasons Eliezer started LW was to increase the rationality of altruists and of people who will become altruists -- in other words, of people committed to improving the world. (The other main reason was recruitment for Eliezer's altruistic FAI project and altruistic organization.) If the only people whose rationality they could hope to increase through LW were completely selfish, Eliezer and Anna would probably have put a lot less time and energy into posting rationality material on LW and a lot more into other altruistic plans.
Most altruists who are sufficiently strategic about their altruism come to believe that improving the effectiveness of other altruists is an extremely potent way to improve the world. Anna, for example, spends vastly more of her time and energy improving the rationality of other altruists than she spends improving her own rationality, because that allotment of her resources best serves her altruistic goal of improving the world. Even the staff of the Singularity Institute who do not have Anna's teaching and helping skills, and who consequently specialize in math, science and computers, spend a significant fraction of their resources trying to improve the rationality of other altruists.
In summary, although no one (that I know of) is opposed to self-improvement's being the focus of most of the posts on LW and no one is opposed to non-altruists' using the site for self-improvement, this site was founded in the hope of increasing the rationality of altruists.
That leads me to believe I have been insufficiently transparent about my motivations for writing, so let me try to rectify that insufficiency:
This site (and OB before it and the SL4 mailing list) has always been a congenial place for altruists, and I wanted to preserve that quality.
Now that I have seen the comments, it occurs to me that my post was probably too heavy-handed in how it went about that goal, but I still think that an effective way to achieve that goal is to write a post that altruists will like and non-altruists will find pointless or even slightly off-putting. If too high a fraction of the recent posts on this site have nothing interesting to say to altruists, that is a problem, because most readers will not read much of the content written in previous years -- that is the way the web is. (The post I was replying to starts by wondering whether LW should have more posts about recursive human self-improvement, and recursive human self-improvement short of whole brain emulation or similar long-range plans is not a potent enough means to altruistic ends to be interesting to altruists.) But my post was too heavy-handed in that it was a reply to a post not of interest to altruists, rather than being a post that stands on its own and is of interest to altruists and not of interest to non-altruists.
Another motivation I had was to persuade people to become more altruistic, but now that I see it written out like that, it occurs to me that probably the only way to do that effectively on LW is to set a positive personal example. Also, it occurs to me that my post did engage in some exhortation or even cheerleading, and exhortation and cheerleading are probably ineffective.
I'm not sure that I can answer effectively without stating how altruistic I consider myself to be. I feel that I am a semi-altruist -- I assign higher utility to my own welfare and happiness and to that of the people I am personally attached to, but, by default, I assign positive utility to the welfare and happiness of any sentient being.
I found your post off-putting because it looked like a covert argument for altruism of the following form: