One other point of reluctance occurs to me: there are conditions under which imagining yourself to be a superstar, while still bad from a selfish viewpoint, might be good for society as a whole: when you're considering becoming a scientist or inventor. Finding a working tungsten light-bulb filament was more than worth wasting hundreds or thousands of failed filaments in Edison's experiments, both from society's point of view and from Edison's... but what if you look at harder scientific problems, for which each world-changing breakthrough might cost hundreds or thousands of less-successful scientists who would have been happier and wealthier in finance or law or medicine or software or...? Maybe it's a good thing that lots of smart kids imagine being the next Einstein, then pick a career which is likely to be suboptimal in terms of personal utility but optimal in terms of global utility.
On the gripping hand, maybe the world would be better in the long run if science were seen as inglorious, (relatively) impoverishing, low status... but very altruistic. "Less science" might be a tolerable price to pay for "less science in the wrong hands".
Recently there have been a couple of articles in the discussion page asking whether rationalists should do action A. Such questions are not uninteresting, but framing them in terms of what a "rationalist" should do makes them poorly phrased.
The rational decision at any time is the decision that a human with a specific utility function B and information C should make to maximise B, given their knowledge (and knowledge about their knowledge) of C. It's not a decision a rationalist should make; it's a decision any human should make. If Omega popped into existence and carefully explained why action A is the best thing for this human to do given their function B and their information C, then said human should agree.
The important question is not what a rationalist should do, but what your utility function and current information are. This is a more difficult question. Humans are often wrong about what they want in the long term, and it's questionable how much we should value happiness now over happiness in the future (in particular, I suspect current and future me might disagree on this point). Quantifying our current information is also rather hard: we are going to make bad probability estimates, if we can make them at all, which lead us into incorrect decisions simply because we haven't considered the evidence carefully enough.
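The decision rule being described is just expected-utility maximisation: given your probability estimates from information C, pick the action that maximises expected B. A minimal sketch (all the action names and numbers here are illustrative, not claims about actual careers):

```python
# Sketch of the decision rule above: choose the action that maximises
# expected utility B, given (possibly shaky) probability estimates
# derived from your information C.

def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action
    return sum(p * u for p, u in outcomes)

def best_action(options):
    # options: dict mapping action name -> list of (probability, utility)
    return max(options, key=lambda a: expected_utility(options[a]))

# Toy example; the same machinery applies whatever the goals are.
options = {
    "career_in_science":  [(0.01, 1000), (0.99, -10)],  # rare big win
    "career_in_software": [(0.90, 50), (0.10, 0)],      # reliable payoff
}
```

Note that the hard part, as the paragraph above says, is not this arithmetic but getting the probabilities and the utility function right in the first place.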
Why is this an important semantic difference? Well, it matters for the cause of refining rationality that we don't end up associating the notion of rationality with certain goals. Some rationalists believe that they want to save the world, and that the best way to do it is by creating friendly AI. This is because they have certain utility functions and certain beliefs about the probability of the singularity. Not all rationalists have these utility functions. Some just want to have a happy home life, meet someone nice, and raise a family. These are different goals, and they can all be helped by rationality, because rationality IS the art of winning. Being able to clearly state one's goals and work out the best way to achieve them is useful pretty much no matter what those goals are. ("pretty much" to prevent silly examples here!)