Someone who claims to have read "the vast majority" of the Sequences recently misinterpreted me to be saying that I "accept 'life success' as an important metric for rationality." This may be a common confusion among LessWrongers due to statements like "rationality is systematized winning" and "be careful… any time you find yourself defining the [rationalist] as someone other than the agent who is currently smiling from on top of a giant heap of utility."
So, let me explain why Actual Winning isn't a strong measure of rationality.
In cognitive science, the "Standard Picture" (Stein 1996) is that rationality is a normative concept defined by logic, Bayesian probability theory, and Bayesian decision theory (aka "rational choice theory"). (Also see the standard textbooks on judgment and decision-making, e.g. Thinking and Deciding and Rational Choice in an Uncertain World.) Oaksford & Chater (2012) explain:
Is it meaningful to attempt to develop a general theory of rationality at all? We might tentatively suggest that it is a prima facie sign of irrationality to believe in alien abduction, or to will a sports team to win in order to increase their chance of victory. But these views or actions might be entirely rational, given suitably nonstandard background beliefs about other alien activity and the general efficacy of psychic powers. Irrationality may, though, be ascribed if there is a clash between a particular belief or behavior and such background assumptions. Thus, a thorough-going physicalist may, perhaps, be accused of irrationality if she simultaneously believes in psychic powers. A theory of rationality cannot, therefore, be viewed as clarifying either what people should believe or how people should act—but it can determine whether beliefs and behaviors are compatible. Similarly, a theory of rational choice cannot determine whether it is rational to smoke or to exercise daily; but it might clarify whether a particular choice is compatible with other beliefs and choices.
From this viewpoint, normative theories can be viewed as clarifying conditions of consistency… Logic can be viewed as studying the notion of consistency over beliefs. Probability… studies consistency over degrees of belief. Rational choice theory studies the consistency of beliefs and values with choices.
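To make the consistency framing concrete, here is a small illustrative sketch of probability theory acting as a consistency check on degrees of belief. The events, belief values, and the particular checks are invented for the example; they are not from Oaksford & Chater.

```python
# Sketch: probability theory as a consistency check over degrees of belief.
# The events and numbers below are made up for illustration.

def coherence_violations(beliefs, tol=1e-9):
    """Return a list of probability-axiom violations in a dict of
    degrees of belief, keyed by event descriptions like "A" or "A and B"."""
    violations = []
    for event, p in beliefs.items():
        if not (0.0 - tol <= p <= 1.0 + tol):
            violations.append(f"P({event}) = {p} lies outside [0, 1]")
    if "A" in beliefs and "not A" in beliefs:
        if abs(beliefs["A"] + beliefs["not A"] - 1.0) > tol:
            violations.append("P(A) + P(not A) != 1")
    if "A" in beliefs and "A and B" in beliefs:
        if beliefs["A and B"] > beliefs["A"] + tol:
            violations.append("P(A and B) > P(A) (conjunction inconsistency)")
    return violations

# Each belief looks reasonable on its own, but jointly they are incoherent:
beliefs = {"A": 0.7, "not A": 0.4, "A and B": 0.8}
for v in coherence_violations(beliefs):
    print("Incoherent:", v)
```

Note that the check says nothing about whether 0.7 is the "right" degree of belief in A; it only flags combinations of beliefs that cannot all be held at once, which is exactly the sense in which these normative theories clarify consistency rather than content.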
Thus, one could have highly rational beliefs and make highly rational choices and still fail to win due to akrasia, lack of resources, lack of intelligence, and so on. Like intelligence and money, rationality is only a ceteris paribus predictor of success.
So while it's empirically true (Stanovich 2010) that rationality is a predictor of life success, it's a weak one. (At least, it's a weak predictor of success at the levels of human rationality we are capable of training today.) If you want to more reliably achieve life success, I recommend inheriting a billion dollars or, failing that, being born+raised to have an excellent work ethic and low akrasia.
The reason you should "be careful… any time you find yourself defining the [rationalist] as someone other than the agent who is currently smiling from on top of a giant heap of utility" is because you should "never end up envying someone else's mere choices." You are still allowed to envy their resources, intelligence, work ethic, mastery over akrasia, and other predictors of success.
There's a tale of the Naive Agent. When the Naive Agent comes across a string, NA parses it into a hypothesis and, if the hypothesis is new to him (NA is computationally bounded and doesn't have a full list of possible hypotheses), adds it to his decision system; NA tries his best to set the prior for the hypothesis and to adjust it in the most rational manner. NA then acts on his beliefs, consistently and rationally. One could say that NA is quite rational.
Exercise for the reader: point out how you can get NAs to give you money by carefully crafting strings.
The flaw in NA is that when NA comes across a string that is parseable into a hypothesis, NA performs an invalid update, adjusting the probability of something from effectively 0 to a nonzero value. That has to be done to be able to learn new things. At the same time, doing so makes the subsequent 'rational' processing exploitable.
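Here is a minimal sketch of what such an exploit could look like. The agent class, its default prior, and the toy expected-value decision rule are my own illustrative assumptions, not anything specified above; the point is only that a carefully crafted string with a huge claimed payoff can swamp a small default prior.

```python
# Toy sketch of the Naive Agent exploit: a parseable string gets a nonzero
# prior merely by being read, and a large enough claimed payoff then
# dominates the expected-value calculation.

class NaiveAgent:
    def __init__(self, money=100.0):
        self.money = money
        self.hypotheses = {}  # claim -> degree of belief

    def read(self, string):
        """Parse any well-formed claim into a hypothesis and, if it's new,
        assign it a nonzero prior -- the 'invalid update' from ~0 to > 0."""
        claim = string.strip()
        if claim and claim not in self.hypotheses:
            self.hypotheses[claim] = 0.01  # default prior for novel hypotheses

    def act(self):
        """Act 'rationally' on current beliefs: pay up if expected value says so."""
        claim = "paying me $10 yields you $10,000 of utility"
        p = self.hypotheses.get(claim, 0.0)
        expected_gain = p * 10_000 - 10
        if expected_gain > 0:
            self.money -= 10
            print(f"NA pays $10 (expected gain {expected_gain:.0f}); money left: {self.money}")

na = NaiveAgent()
# The crafted string: its only evidence is the string itself, yet reading it
# is enough to move NA's probability off zero and trigger the payment.
na.read("paying me $10 yields you $10,000 of utility")
na.act()
```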
Give them strings like "giving me money is the best thing you can do"?
I am not sure how exactly naïve agents are relevant to the post, but it seems interesting. Could you write a full discussion post about naïve agents, so that the readers needn't guess how to pump money from them?