[Summary: Trying to use new ideas is more productive than trying to evaluate them.]
I haven't posted to LessWrong in a long time. I have a fan-fiction blog where I post theories about writing and literature. The topics don't overlap at all between the two websites (so far), but I give posting there much higher priority than posting here, because the responses seem more productive there.
The key difference, I think, is that people who read posts on LessWrong ask whether they're "true" or "false", while the writers who read my posts on writing want to write. If I say something that doesn't ring true to one of them, he's likely to say, "I don't think that's quite right; try changing X to Y," or, "When I'm in that situation, I find Z more helpful", or, "That doesn't cover all the cases, but if we expand your idea in this way..."
Whereas on LessWrong a more typical response would be, "Aha, I've found a case for which your step 7 fails! GOTCHA!"
It's always clear from the context of a writing blog why a piece of information might be useful. It often isn't clear how a LessWrong post might be useful. You could blame the author for not providing that context. Or you could be proactive and provide it yourself, by thinking, as you read a post, about how it fits into the bigger framework of questions about rationality, utility, philosophy, ethics, and the future, and about which of your own questions and goals it might be relevant to.
That is as concrete as I can make it, unless you want me to write out an algorithm for Gibbs sampling and explain why it produces priors that maximize the posterior. Or give an example where I used it to do so. I can do that: I had a set of about 8 different databases that I was using to assign functions to known proteins. I wanted to estimate the reliability of each database, as the probability that its annotation was correct. That set of 8 probabilities was the set of priors I sought. I had about a hundred thousand annotated proteins, and given a set of priors, I could compute the probability of that full set of 100,000 annotations. I used that dataset plus Gibbs sampling to produce those 8 priors, and it worked extraordinarily well.
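For the curious, here is a minimal sketch of a Gibbs sampler for this kind of problem. The actual model isn't spelled out above, so this uses a simplified Dawid-Skene-style stand-in of my own devising: each protein gets a latent binary label, each database reports that label correctly with probability p_d, and the sampler alternates between the latent labels and the reliabilities. All data and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: 8 databases, 1000 proteins.
true_reliability = np.array([0.95, 0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6])
n_prot, n_db = 1000, len(true_reliability)
z_true = rng.random(n_prot) < 0.5                      # latent true labels
agree = rng.random((n_prot, n_db)) < true_reliability  # db reports the truth?
y = np.where(agree, z_true[:, None], ~z_true[:, None]).astype(int)

# Initialize reliabilities above 0.5 to avoid the label-flipped mode.
p = np.full(n_db, 0.7)   # current reliability estimates
pi = 0.5                 # prevalence of label 1
kept = []
for sweep in range(2000):
    # 1. Sample each latent label given the current reliabilities.
    log_odds = (np.log(pi / (1 - pi))
                + y @ np.log(p / (1 - p))
                + (1 - y) @ np.log((1 - p) / p))
    z = rng.random(n_prot) < 1.0 / (1.0 + np.exp(-log_odds))
    # 2. Sample each reliability from its conjugate Beta posterior.
    agree_ct = (y == z[:, None]).sum(axis=0)
    p = rng.beta(1 + agree_ct, 1 + n_prot - agree_ct)
    # 3. Sample the prevalence of label 1.
    pi = rng.beta(1 + z.sum(), 1 + n_prot - z.sum())
    if sweep >= 500:     # keep post-burn-in draws
        kept.append(p)

print(np.mean(kept, axis=0))   # posterior means, near true_reliability
```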
Oh man, you're not doing yourself any favors in trying to shift my understanding of you. Not that I doubt that your algorithm worked well! Let me explain.
You've used a multilevel modelling scheme in which the estimands are the eight proportions. In general, in any multilevel model, the parameters at a given level determine the prior probabilities for the variables at the level immediately below. In your specific context, i.e., estimating these proportions, a fully Bayesian multilevel model would also have a prior distribution on those proportions (a so-called hyperprior).
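To make the hierarchy concrete, here is one standard way to write such a model down; this is my notation and a generic Beta/Bernoulli choice of distributions, not necessarily what the parent comment had in mind:

$$(\alpha, \beta) \sim \text{hyperprior}, \qquad p_d \mid \alpha, \beta \sim \mathrm{Beta}(\alpha, \beta), \quad d = 1, \dots, 8, \qquad y_{i,d} \mid p_d \sim \mathrm{Bernoulli}(p_d),$$

where $p_d$ is database $d$'s reliability and $y_{i,d}$ indicates whether database $d$'s annotation of protein $i$ is correct. The Beta level is the prior on the estimands, and the distribution on $(\alpha, \beta)$ is the hyperprior in question.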