Really? In that case you should, hopefully, be able to work correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
If you meant those to be topical, you've got your givens inverted.
No, actually I don't. Those are the likelihoods that matter when deciding how to update on the new evidence "Elias asserts X". Have you not just been telling us that you teach others how this mathematics works?
I'm sorry, but no matter how many times you reiterate it, each iteration will be no more correct than the last until you can provide a valid justification for the assertion that appeals to authority are not always fallacious.
Why are you saying that? I didn't just make the assertion again. I pointed to some of the Bayesian reasoning that it translates to.
If your values for p(Elias asserts X | X is true) and p(Elias asserts X | X is false) are not equal and you gain the information "Elias asserts X", your probability that X is true must change, or your thinking is simply wrong. It is simple mathematics. That being the case, supplying the information "Elias asserts X" to someone who already has information about Elias's expertise is not fallacious. It is supplying information that should change their mind.
The above applies regardless of whether Elias has ever written any works on the subject or supplied arguments in any way. It applies even if Elias himself has no idea why he has the intuition "X". It applies if Elias is a flipping black box that spits out statements. If both you and the person you are speaking with have reason to believe p(Elias asserts X | X is true) > p(Elias asserts X | X is false), then supplying "Elias asserts X" as an argument in favour of X is not fallacious.
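To make the arithmetic behind this concrete, here is a minimal worked update in odds form; the prior of 0.5 and the likelihoods 0.8 and 0.2 are purely illustrative assumptions of mine, not values anyone in the discussion has claimed about Elias:

```latex
% Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
% All numbers are illustrative assumptions, not values from the discussion.
\[
\frac{p(\text{X is true} \mid \text{Elias asserts X})}{p(\text{X is false} \mid \text{Elias asserts X})}
  = \frac{p(\text{X is true})}{p(\text{X is false})}
    \cdot
    \frac{p(\text{Elias asserts X} \mid \text{X is true})}{p(\text{Elias asserts X} \mid \text{X is false})}
  = \frac{0.5}{0.5} \cdot \frac{0.8}{0.2} = 4
\]
\[
\Rightarrow \quad p(\text{X is true} \mid \text{Elias asserts X}) = \frac{4}{4 + 1} = 0.8
\]
% Whenever the likelihood ratio differs from 1, the posterior necessarily differs from the prior.
```

The size and direction of the shift depend only on that likelihood ratio, which is exactly why "Elias asserts X" functions as evidence rather than as a fallacy.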
LessWrongers as a group are often accused of talking about rationality without putting it into practice (for an extended discussion, see Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality). This behavior is particularly insidious because it is self-reinforcing: it will attract more armchair rationalists to LessWrong, who will in turn reinforce the trend in an affective death spiral until LessWrong is a community of utilitarian apologists akin to the internet communities of anorexics who congratulate each other on their weight loss. It will be a community where, instead of discussing practical ways to "overcome bias" (the original intent of the Sequences), we discuss arcane decision theories, who gets to be in our CEV, and the most rational birthday presents (sound familiar?).
A recent attempt to counter this trend, or at least make us feel better about it, was a series of discussions on "leveling up": accomplishing a set of practical, well-defined goals to increment your rationalist "level". It's hard to see how these goals fit into a long-term plan to achieve anything besides self-improvement for its own sake. Indeed, the article begins by priming us with a Renaissance-man-inspired quote, and it stands in stark contrast to articles emphasizing practical altruism such as "efficient charity".
So what's the solution? I don't know. However, I can tell you a few things about the solution, whatever it may be:
Whatever you may decide to do, be sure it follows these principles. If none of your plans align with these guidelines, then construct a new one, on the spot, immediately. Just do something: every moment you sit, hundreds of thousands are dying and billions are suffering. Under your judgement, your plan can self-modify in the future to overcome its flaws. Become an optimization process; shut up and calculate.
I declare Crocker's rules on the writing style of this post.