In a survey, is an increase in rounding errors in estimators a problem? As long as there's no bias in how the values get rounded, we should be fine. If there is such a bias, I'm curious what it is and what causes it.
I suspect it would have a strong bias towards obvious fractions and obvious multiples. That isn't a directional bias, but it is a bias against precision.
for a small price, [sell you] each of those
Typo.
Eliezer,
Write more like this.
Seconded. I can see how it won't be your highest priority, but if you have a spare moment and are looking for something to break the monotony...
Well, now I'm torn. Damn it, in writing, "beisutsukai" looks far better than "beizutsukai", and it may even sound better.
You've already coined the word. Too late to change it!
Is "complementary" medicine the new euphemism for alternative/natural/Eastern/not-tested-with-science medicine? I haven't heard of it before.
Not so new, but yup.
You should survey superstition (astrology, bad luck avoidance, complementary medicine, etc).
Yes, it might be more useful to list some wedge issues that usually divide the parties in the US.
Those won't divide the parties outside the US. Every political party in Britain, aside from the extreme fringe, is for the availability of abortion and government provision of free healthcare, for example.
And things that do divide the parties here, like compulsory ID cards, don't divide the parties in the US.
I wonder if you'd get better probability values if you used AJAX slider controls for a continuous value between 0 and 1. Less chance of anchoring percentages on multiples of 10 and 5.
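As a rough sketch of that idea (the function name and step count here are assumptions for illustration, not anything from an actual survey implementation): a slider with many discrete steps maps to probabilities that don't fall neatly on multiples of 0.05 or 0.10, so there's less to anchor on.

```javascript
// Hypothetical mapping from a many-stepped slider to a probability.
// With, say, 1000 steps, respondents aren't nudged toward multiples
// of 5% or 10% the way a free-text percentage box nudges them.
function sliderToProbability(position, steps = 1000) {
  if (position < 0 || position > steps) {
    throw new RangeError("slider position out of range");
  }
  return position / steps;
}
```

In HTML this would correspond to something like `<input type="range" min="0" max="1000">`, with the reported probability taken as `value / 1000`.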
I'm kind of thinking of doing a series of posts gently spelling out, step by step, the arguments for Bayesian decision theory. Part of this is for myself: a while back I read Omohundro's vulnerability argument, but felt there were missing bits that I had to fill in personally, assumptions I had to sit and think on before I could really say "yes, obviously that has to be true". There are also some things I think I can generalize or restate a bit, and so on.
So, as much for myself, to organize and clear that up, as for others, I want to do a short series called "How not to be stupid (given unbounded computational power)", in which each post focuses on one or a small number of related rules/principles of Bayesian decision theory and epistemic probabilities, and gently derives them from the "don't be stupid" principle. (Again, based on Omohundro's vulnerability arguments and the usual Dutch book arguments for Bayesian stuff, but stretched out and filled in with the details that I personally felt were missing and needed to work out.)
And I want to do it as a series, rather than a single blob of a post, so I can focus step by step on a small chunk of the problem and make it easier to reference related rules and so on.
Would this be of any use to anyone here, though? (Maybe a good sequence for beginners, to show one reason why Bayes and decision theory are the Right Way?) Or would it be more clutter than anything else?
Isn't a control system using feedback basically analogous to a look-up table? Feedback loops by themselves aren't optimizers; they're happenstance. Feedback loops that usefully seek a goal are the output of an optimization process that ran beforehand.
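One way to see the analogy (a toy sketch; the thermostat and all names here are hypothetical): a bang-bang thermostat can be written either as a feedback rule or as a table precomputed over discretized temperatures, and the two behave identically.

```javascript
// 1. As a feedback rule: turn the heater on when below the setpoint.
function heaterOnFeedback(temp, setpoint) {
  return temp < setpoint;
}

// 2. As a look-up table over whole-degree temperatures, filled in
//    before the control loop ever runs.
const SETPOINT = 20;
const heaterTable = new Map();
for (let t = 0; t <= 40; t++) {
  heaterTable.set(t, t < SETPOINT); // precomputed, not decided at runtime
}
function heaterOnTable(temp) {
  return heaterTable.get(Math.round(temp));
}
```

Both give the same behavior; the "design work" (choosing the rule, or equivalently filling the table) happened before the loop ran, which is the point about the optimization process running beforehand.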