I wasn't talking about idea X itself; I was talking about the process of thinking about idea X. We were discussing how smart EY is, and I used that specific type of thinking about X as a counterexample to the sanity waterline being raised in any way.
One can think about plumbing wrongly, e.g. by imagining that pipes grow as part of a pipe plant that must be ripe or the pipes will burst, even though pipes and valves and so on exist and can be thought about correctly, and plumbing is not an invalid idea. It doesn't matter to the argument I'm making whether AIs would foom (whether pipes would burst at N bars). It only matters that the reasons for the belief aren't valid, and aren't even close to being valid (especially for the post-foom state).
edit: Maybe the issue is that people in the West don't get enough proofs in their math homework early enough. You get bad grades for bad proofs, regardless of whether the things you proved were true or false! A few years of that kind of schooling makes you internalize it well enough. Now, the people who didn't internalize this are very annoying to argue with. They keep demanding that you prove the opposite; they do vague reasoning that's wrong everywhere and ask you to pinpoint a specific error; they ask you to tell them the better way to reason if you don't like how they reasoned about it (imagine this for Fermat's last theorem a couple of decades ago, or for P!=NP now); they make every excuse they can think of to disregard what you say on the basis of some fallacy.
edit2: Or rather, they disregard the critique as 'not good enough', akin to disregarding a critique of a flawed mathematical proof because the critique doesn't itself prove the theorem true or false. Anyway, I just realized that if I think Eliezer is a fairly successful sociopath who's scamming people for money, that results in higher expected utility for me from reading his writings (more curiosity) than if I think he is a self-deluded person and the profitability of the belief is an accident.
edit: Maybe the issue is that people in the West don't get enough proofs in their math homework early enough.
From personal experience, we were introduced to those in our 10th year (might have been 9th?), so I would have been 15 or 16 when I first encountered the idea of formal proofs. The idea is fairly intuitive to me, but I also have a decent respect for people who seem to routinely produce correct answers via faulty reasoning.
I've spent so much time in the cogsci literature that I know the LW approach to rationality is basically the mainstream cogsci approach to rationality (plus some extra stuff about, e.g., language), but... do other people not know this? Do people one step removed from LessWrong — say, in the 'atheist' and 'skeptic' communities — not know this? If this is causing credibility problems in our broader community, it'd be relatively easy to show people that Less Wrong is not, in fact, a "fringe" approach to rationality.
For example, here's Oaksford & Chater in the second chapter of the (excellent) new Oxford Handbook of Thinking and Reasoning, the chapter on normative systems of rationality:
Is it meaningful to attempt to develop a general theory of rationality at all? We might tentatively suggest that it is a prima facie sign of irrationality to believe in alien abduction, or to will a sports team to win in order to increase their chance of victory. But these views or actions might be entirely rational, given suitably nonstandard background beliefs about other alien activity and the general efficacy of psychic powers. Irrationality may, though, be ascribed if there is a clash between a particular belief or behavior and such background assumptions. Thus, a thorough-going physicalist may, perhaps, be accused of irrationality if she simultaneously believes in psychic powers. A theory of rationality cannot, therefore, be viewed as clarifying either what people should believe or how people should act—but it can determine whether beliefs and behaviors are compatible. Similarly, a theory of rational choice cannot determine whether it is rational to smoke or to exercise daily; but it might clarify whether a particular choice is compatible with other beliefs and choices.
From this viewpoint, normative theories can be viewed as clarifying conditions of consistency… Logic can be viewed as studying the notion of consistency over beliefs. Probability… studies consistency over degrees of belief. Rational choice theory studies the consistency of beliefs and values with choices.
They go on to clarify that by probability they mean Bayesian probability theory, and by rational choice theory they mean Bayesian decision theory. You'll get the same account in the textbooks on the cogsci of rationality, e.g. Thinking and Deciding or Rational Choice in an Uncertain World.