I do think that Bayesian has a better ring to it than, say, Bayesiologist or Bayesonomer. It is probably healthy to have a strong gut reaction against -isms, but it seems like a good idea to have a shorthand word that basically tells the people of LW that you are aware of prior odds and how they interplay with posteriors (that can actually be taken as rather raunchy... I think I'll keep it there), that you see probabilities as statements of subjective belief, and that you are trying to find ways to bring that simple mathematical statement, and all of its consequences, into your being.
It might not be the best shorthand way of telling someone what stage of the journey you are on, but at least they know that you've left the gate. It's kind of like how I saw myself as an undergraduate: I wasn't a mathematician, I was an aspiring mathematician. Now that I've graduated, and haven't done much heavy math lifting since, I might call myself a lapsed mathematician. These terms may sound icky and religious in some anti-rational circles, but in a rationalist sphere it's a useful way to tell people what you're thinking right now, and what you've been thinking recently.
Although, come to think of it... it might be awesome if we took to titles like old-school royalty, where every topic (instead of nation) you master gets added to your name. I am Beriukay, Lord Of The Realm Of Dark Sun, Ravager Of Creationism, Wielder Of The Twin Blades Of QWERTY And Dvorak, Bringer Of De Morgan's Holy Flame... That wouldn't look bad in any of the Bayesian Conspiracy fictions, and while verbose, it would be a lot more accurate and fun than just calling ourselves Bayesians or whatever.
On the other hand, a lot of the basic ideas of rationality need Bayes' theorem to justify them. In particular, something can only be demonstrated to be a bias with respect to the Bayesian answer (see the worked example just after this comment). Without understanding probability, a lot of the advice in the Sequences would seem like arbitrary rules handed down from above.
Of course, these arbitrary rules would actually work, but I'm not sure if that's the best way to teach people.
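(To make "a bias with respect to the Bayesian answer" concrete, here is a minimal worked sketch of the classic medical-test case; the numbers are made up purely for illustration. Many people intuitively answer "about 90%", and the gap between that intuition and the Bayesian posterior is what gets labelled a bias.)

```python
# Toy base-rate example: the "bias" is the gap between the intuitive answer
# (~90%) and the Bayesian posterior. All numbers are invented for illustration.

prior = 0.01              # P(disease): 1% of people tested have the condition
p_pos_given_d = 0.90      # P(positive | disease): test sensitivity
p_pos_given_not_d = 0.09  # P(positive | no disease): false-positive rate

# Total probability of a positive result, counting both ways it can arise
p_pos = p_pos_given_d * prior + p_pos_given_not_d * (1 - prior)

# Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
posterior = p_pos_given_d * prior / p_pos

print(f"P(disease | positive test) = {posterior:.3f}")  # about 0.092, not 0.9
```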
On the other hand, a lot of the basic ideas of rationality need Bayes' theorem to justify them. ...
Not true. Theorem:Bayes is simply the result of more fundamental information-theoretic heuristics, which themselves would be capable, for the same reasons, of generating the same ideas of rationality -- though it would probably require a longer inferential path, which is why Theorem:Bayes seems like it's the grounding principle (rather than the best current operationalism of the true grounding principle).
The use of probabilities itself results from the same heuristics. These grounding heuristics form what I have called "correct reasoning". "Correct reasoning" is the (meta-)heuristic that says a being should use precisely the heuristics that it would be forced to use after starting from an arbitrarily wrong belief set and encountering arbitrarily many instances of informative evidence.
(If one recognizes what heuristics one will move to on encounters with more evidence, one can move to them without waiting for the evidence to arise, on pain of excessively slow updating.)
In a conflict between correct reasoning and Theorem:Bayes, correct reasoning should take precedence.
Therefore, the humans here should say that they are "correct reasoners", not Bayesians.
This may seem a bit off-topic, but is 'Bayesian' pronounced "Bay-EE-shin", "BAY-zee-an", or "BAY-shin"?
The second is the best. The third has some popularity, but I don't like it because it disguises the morphology. The fellow's name was Bayes; the adjective should thus be Bayes-ian, not Baye-sian.
I say the last, except that I pronounce the 'sh' a bit like a French 'j', or the 'g' in "mirage" -- just like "Cartesian". I've never heard the first. The second I've heard, but I don't like it as much.
Most of the insights available on LessWrong don't require people to understand Bayes' Theorem (or timeless decision theory).
"What you believe after seeing the evidence depends on what you believe before seeing the evidence" is, I think, a decent "layman paraphrase" of (one aspect of) Bayes' theorem that I think people do need to understand. "Don't forget to consider other ways the data could have arisen" is another.
On TDT, I agree; I regard that as more of a specialized AI topic.
As I understand it, outside of LessWrong, "Bayesians" are individuals who promote some non-frequentist variety of probability theory -- you know, statisticians, philosophers, academics, etc. -- and use it to solve problems in their respective fields.
But when I read the term around these parts, it usually means one who has an almost therapeutic view of (subjective) Bayesian probability theory, believing that they can use it to refine their own rational faculties in the direction of an ideal Bayesian brain.
Related (and maybe of help to newcomers): Kaj's Feb 2010 article What is Bayesianism?, a summary of basic material.
(Is Bayesianism even a word? Should it be? The suffix "ism" sets off warning lights for me.)
Visitors to LessWrong may come away with the impression that they need to be Bayesians to be rational, or to fit in here. But most people are a long way from the point where learning Bayesian thought patterns is the most time-effective thing they can do to improve their rationality. Most of the insights available on LessWrong don't require people to understand Bayes' Theorem (or timeless decision theory).
I'm not calling for any specific change; I just ask that we keep this in mind when writing things in the Wiki, or when constructing a rationality workbook.