Upon reading this, my immediate response was:
What does this have to do with the Singularity Institute's purpose? You're the Singularity Institute, not the Rationality Institute.
I can see that if you have a team of problem solvers, a workshop or retreat designed to enhance their problem-solving skills makes sense. But as described, there's no indication that graduates of the Boot Camp will then go on to tackle conceptual problems of AI design or tactics for the Singularity.
What seems to be happening is that, instead of making connections to people who know about cognitive neuroscience, decision theory, and the theory of algorithms, there is a drive to increase the number of people who share a particular subjective philosophy and subjective practice of rationality - perhaps out of a belief that the discoveries needed to produce Friendly AI won't be made by people who haven't adopted this philosophy and this practice.
I find this a little ominous for several reasons:
It could be a symptom of mission creep. The mission, as I recall, was to design and code a Friendly artificial intelligence. But "produc[ing] formidable rationalists" sounds like it's meant to make the world better in a generalized way, by producing people who can shine the light of rationality into every dark corner, et cetera. Maybe someone should be doing this, but it's potentially a huge distraction from the more important task.
Also, I'm far more impressed by the specific ideas Eliezer has come up with over the years - the concept of seed AI; the concept of Friendly AI; CEV; TDT - than by his ruminations about rationality in the Sequences. They're interesting, yes. It's also interesting to hear Feynman talk about how to do science, or to read Einstein's reflections on life. But the discoveries in physics which complemented those of Einstein and Feynman weren't achieved by people who studied their intellectual biographies and sought to reproduce their subjective method; they were achieved by other people of high intelligence who also studied the physical world.
It may seem at times that the supposed professionals in the FAI-relevant fields I listed above are terminally obtuse, for having failed to grasp their own relevance to the FAI problem, or the schema of the solution as proposed by SIAI. That, and the way that people working in AI are just sleepwalking towards the creation of superhuman intelligence without grasping that the world won't get a second chance if they get machine intelligence very right but machine values very wrong - all of that could reinforce the attitude that to have any chance of succeeding, SIAI needs to have a group of people who share a subjective methodology, and not just domain expertise.
However, I think we are rapidly approaching a point where a significant number of people are going to understand that the "intelligence explosion" will above all be about which utility function dominates that event. There have been discussions about how a proto-Friendly AI might try to infer the human utility-function schema, how to do so without creating large numbers of simulated persons who might be subjected to cognitive vivisection, and so forth. But I suspect that will never happen, at least not in this brute-force fashion, in which whole adult brains might be scanned, simulated, modified and so on, for the purpose of reverse-engineering the human decision architecture.
My expectation is that the presently small fields of machine ethics and the neuroscience of morality will grow rapidly and come into contact, and there will be a distributed research subculture consciously focused on determining the optimal AI value system in the light of biological human nature. In other words, there will be human minds trying to answer this question long before anyone has the capacity to direct an AI to solve it. We should expect that before we reach the point of a Singularity, there will be a body of educated public opinion regarding what the ultimate utility function or decision method (for a transhuman AI) should be, deriving from work in those fields which ought to be FAI-relevant but which have yet to engage with the problem. That is, they will be collectively engaging with the problem before anyone gets to outsource the necessary research to AIs.
The conclusion I draw from this for the present is that there needs to be more preparation for this future circumstance, and less effort to spread a set of methods intended just to facilitate generalized rationality. People who want to see Friendly AI created need to be ready to talk with researchers in those other fields, who never attended "Rationality Boot Camp" but who will nonetheless be independently coming to the threshold of thinking about the FAI problem (perhaps under a different name) and developing solutions to it.

When the time comes, there will be a phase transition in academia and R&D, from ignoring the problem to wanting to work on it. The creation of ethical artificial minds is not going to be the work of one startup or one secret military project, working in isolation from mainstream intellectual culture; nor is it a mirage that will hang on the horizon of the future forever. It will happen because of that phase transition, and tens of thousands of people will be working on it, in one way or another. That doesn't mean they will all be relevant or right, but there will be a pre-Singularity ferment that develops very quickly, in which certain specific understandings held by the people who labored in isolation on this problem for many years will be surpassed and superseded. People will have ingrained assumptions about the answer to subproblem X or subproblem Y - assumptions to which one will have grown accustomed during the years of isolation spent trying to solve all subproblems at once - and one must be ready for these answer-schemas to be junked when the true experts in those areas finally deign to turn their attention to the subproblem in question.
One other observation about "lessons in rationality". Luke recently posted about LW's philosophy as being just a form of "naturalism" (i.e. materialism), a view that has already been well-developed by mainstream philosophy, but it was countered that these philosophers have few results to show for their efforts, even if they get the basics right. I think the crucial question, regarding both LW's originality and its efficacy, concerns method. It has been demonstrated that there is this other intellectual culture, the naturalistic sector of analytic philosophy, which shares a lot of the basic LW worldview. But are there people "producing results" (or perhaps just arriving at opinions) in a way comparable to the way that opinions are being produced here? For example, Will Sawin suggested that LW's epistemic method consists of first imagining how a perfectly rational being would think about a problem. As a method of rationality, this is still very "subjective" and "intuitive" - it's not as if you're plugging numbers into a Bayesian formula and computing the answer, which remains the idealized standard of rationality here.
So, if someone wants to do some comparative scholarship regarding methods of rationality that already exist out there, an important thing to recognize is that LW's method or practice, whatever it is, is a subjective method. I don't call it subjective in order to be derogatory, but just to point out that it is a method intended to be used by conscious beings, whose practice has to involve conscious awareness, whether through real-time reflection or after-the-fact analysis of behavior and results. The LW method is not an algorithm or a computation in the normal sense, though these non-subjective epistemological ideas obviously play a normative and inspirational role for LW humans trying to "refine their rationality". So if there is "prior art", if LW's methods have been anticipated or even surpassed somewhere, it's going to be in some tradition, discipline, or activity where the analysis of subjectivity is fairly advanced, and not just one where some calculus of objectivities, like probability theory or computer science, has been raised to a high art.
For that matter, the art of getting the best performance out of the human brain won't just involve analysis; not even analysis of subjectivity is the whole story. The brain spontaneously synthesizes and creates, and one also needs to identify the conditions under which it does so most fluently and effectively.
I couldn't have put it better myself.
I do understand the SIAI's explanations for their rationality work.
But in the end, I quite agree with Mitchell's points.
I can't find the cite, but I vaguely recall someone from SIAI saying that LessWrong and the rationality stuff was by far the most effective recruitment method they've ever used.