cata comments on The Importance of Self-Doubt - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This post suffers from lumping together orthogonal issues and the conclusions drawn from them. Let's consider the following claims individually:
A priori, from (8) we can conclude (9). But assuming the a priori improbable (7), (8) is a rational thing for X to conclude, and (9) doesn't automatically follow. So, at this level of analysis, in deciding whether X is overconfident, we must necessarily evaluate (7). In most cases, (7) is obviously implausible, but the post itself suggests one pattern for recognizing when it isn't:
Thus, "doing something which very visibly and decisively alters the fate of humanity" is the kind of evidence that allows one to conclude (7). But unfortunately there is no royal road to epistemic rationality: we can't require this particular form of argument for (7) in all cases. Sometimes the argument takes an incompatible form.
In our case, the argument for (7) has the following shape. Assuming (2), from (3) and (4) it follows that (5), and from (1), (5) and (6) we conclude (7). Note that the only claim about a person is (4), that their work contributes to the development of FAI. All the other claims are about the world, not about the person.
Given the structure of this argument for the abhorrent (8), something being wrong with the person can only affect the truth of (4), not of the other claims. In particular, person X is overconfident if their work doesn't in fact contribute to FAI (assuming it's possible to contribute to FAI at all).
Now, the extent of overconfidence in evaluating (4) is unrelated to the weight of importance conveyed by the object-level conclusions (1), (2) and (3). One can be underconfident about (4) and (8) will still follow. In fact, (8) is rather insensitive to the strength of assertion (4): even if you contribute to FAI only a little, but the other object-level claims hold, your work is still very important.
Finally, my impression is that Eliezer is indeed overconfident about his ability to contribute technically to FAI (4), but not to the extent this post suggests, since as I said the strength of claim (8) has nothing to do with the level of overconfidence in (4), and even a small contribution to FAI is enough to conclude (8) given the other object-level assumptions. Indeed, Eliezer never claims that success is assured:
On the other hand, only a few people are currently in a position to claim (4) to any extent. One needs to (a) understand the problem statement, (b) be talented enough, and (c) take the problem seriously enough to direct serious effort at it.
My ulterior motive in elaborating this argument is to make the situation a little clearer to myself, since I claim the same role, just to a smaller extent. (One reason I don't have much confidence is that each time I "level up", most recently this May, I realize how misguided my past efforts were, and how much time and effort it will take to develop the skillset necessary for the next step.) I don't expect to solve the whole problem (and I don't expect Eliezer or Marcello or Wei to solve the whole problem), but I do expect that over the years some measure of progress can be made by my efforts and theirs, and I expect other people will turn up (thanks to Eliezer's work on communicating the problem statement of FAI and SIAI's new work on spreading the word) whose contributions will be more significant.
Generally speaking, your argument isn't very persuasive unless you believe that the world is doomed without FAI and that direct FAI research is the only significant contribution you can make to saving it. (EDIT: To clarify slightly after your response, I mean to point out that you didn't directly mention these particular assumptions, and that I think many people take issue with them.)
My personal, rather uninformed belief is that FAI would be a source of enormous good, but it's not necessary for humanity to continue to grow and to overcome x-risk (so (3) is weaker); X may be contributing to the development of FAI, but not that much (so (4) is weaker); and other people engaged in productive pursuits are also contributing a non-zero amount to "saving the world" (so (6) is weaker).
As such, I have a hard time concluding that X's activity is anywhere near the "most important" using your reasoning, although it may be quite important.
The argument I gave doesn't include justification of the things it assumes (the ones you referred to). It only serves to separate issues with claims about a person from issues with claims about what's possible in the world. Both kinds of claims (the assumptions in the argument I gave) can be argued with, but necessarily separately.
OK, I now see what your post was aimed at, a la this other post you made. I agree that criticism ought to be toward person X's beliefs about the world, not his conclusions about himself.