Logos01, we seem to be using different definitions of "rational behavior". So far I can't tell if this stems from a political dispute, or a factual disagreement, or just an argument that took on a life of its own.
We are, and I noted this at the start when I distinguished "rational behavior" from "instrumentally rational behavior".
Please try to state your first claim (from the great-grandparent comment) without using the word "rational" or "reason" or any synonym thereof.
This is a request that is definitionally impossible, since the topic at hand was "what is rational behavior".
For my own position: if you choose not to do what works or "wins", then complaining about how you conformed to the right principles will accomplish nothing (except in cases where complaining does help). It will not change the outcome, nor will it increase your knowledge.
No contest.
In my case, what I was getting at was the notion that it is possible to present a counterfactual scenario where doing what "loses" is rationally necessary. For this to occur, the processes of rationality available to Louis-the-Loser would have to take a set of goals and conclude that it is necessary to violate all of them.
Let's assume that Louis-the-Loser has a supergoal of giving all humans more happiness over time. Over time, Louis comes to the factual conclusion (with total certainty) that humans are dead set on reducing their happiness to negative levels -- permanently -- and that they have the capacity to do so. Perhaps you now conclude that Louis's sole remaining rational decision is to fail his supergoal and kill all humans. This would yield maximal happiness for humans, since he would prevent their negative happiness, yes? (And therefore, "win".)
But there's a second answer to this question. And it is equally rationally viable. Give the humans what they want. Allow them to constantly and continuously maximize their unhappiness; perhaps even facilitate them in that endeavor. Now, why is this a reasonable thing to do? Because even total certainty can be wrong, and even P=1 statements can be revised.
However, it does require Louis to actually lose.
To focus on one point that seems straightforward:
even P=1 statements can be revised.
My first response was: no, they can't. I'll change that to: "How, exactly?"
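For what it's worth, under strict Bayesian conditionalization a probability of exactly 1 genuinely cannot be moved by any evidence, whereas anything short of 1 can. A minimal sketch (the function name is mine, purely for illustration):

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) via Bayes' theorem.

    prior            -- P(H)
    likelihood_h     -- P(E|H)
    likelihood_not_h -- P(E|not-H)
    """
    # Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# With prior = 1, the (1 - prior) term vanishes, so any evidence
# whatsoever leaves the posterior pinned at exactly 1:
print(bayes_update(1.0, 0.01, 0.99))    # -> 1.0

# Whereas a prior merely *close* to 1 does move under the same evidence:
print(bayes_update(0.999, 0.01, 0.99))  # roughly 0.91
```

So within the Bayesian framework, "revising" a P=1 belief would require stepping outside conditionalization entirely, which is precisely why the question "how, exactly?" has bite.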
LessWrongers as a group are often accused of talking about rationality without putting it into practice (for an elaborated discussion, see Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality). This behavior is particularly insidious because it is self-reinforcing: it attracts more armchair rationalists to LessWrong, who in turn reinforce the trend in an affective death spiral, until LessWrong becomes a community of utilitarian apologists akin to the internet communities of anorexics who congratulate each other on their weight loss. It will be a community where, instead of discussing practical ways to "overcome bias" (the original intent of the sequences), we discuss arcane decision theories, who gets to be in our CEV, and the most rational birthday presents (sound familiar?).
A recent attempt to counter this trend, or at least make us feel better about it, was a series of discussions on "leveling up": accomplishing a set of practical, well-defined goals to increment your rationalist "level". It's hard to see how these goals fit into a long-term plan to achieve anything besides self-improvement for its own sake. Indeed, the article begins by priming us with a Renaissance-man-inspired quote, and it stands in stark contrast to articles emphasizing practical altruism, such as "efficient charity".
So what's the solution? I don't know. However, I can tell you a few things about the solution, whatever it may be:
Whatever you decide to do, be sure it follows these principles. If none of your plans align with these guidelines, then construct a new one -- on the spot, immediately. Just do something: every moment you sit idle, hundreds of thousands are dying and billions are suffering. Under your judgement, your plan can self-modify in the future to overcome its flaws. Become an optimization process; shut up and calculate.
I declare Crocker's rules on the writing style of this post.