Via Tyler Cowen, Max Albert has a paper critiquing Bayesian rationality.
It seems pretty shoddy to me, but I'd appreciate analysis here. The core claims seem more like word games than legitimate objections.
The reason you feel most comfortable with a job (unless, like me, you're in the minority - a job would destroy my psyche) is that you've been brainwashed by many years of school, socialization and practice. I pick the word brainwashed carefully, because it's more than training or acclimation. It's something that's been taught to you by people who needed you to believe it was the way things are supposed to be.
-- Seth Godin
This sounds very Foucauldian, almost straight out of Discipline and Punish.
I'm not Seth Godin, by the way.
It seems there are a few meta-positions you have to hold before taking Bayesianism as talked about here; you need the concept of Winning first. Bayes is not sufficient for sanity, if you have, say, an anti-Occamian or anti-Laplacian prior.
What this site is for is to help us be good rationalists; to win. Bayesianism is the best candidate methodology for dealing with uncertainty. We even have theorems showing that, within its domain, it is uniquely good. My understanding of what we mean by Bayesianism is updating in the light of new evidence, and updating correctly within the constraints of sanity (cf. Dutch books).
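For concreteness, "updating correctly" here just means applying Bayes' theorem: the posterior is proportional to the likelihood times the prior. A minimal sketch (the function and example are mine, not anything canonical):

```python
def bayes_update(priors, likelihoods):
    """Update hypothesis -> prior probability using hypothesis -> P(evidence | hypothesis).

    Returns the normalized posterior distribution over the same hypotheses.
    """
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Example: a coin is either fair or double-headed, each with prior 0.5.
# We observe one flip come up heads.
priors = {"fair": 0.5, "double-headed": 0.5}
likelihoods = {"fair": 0.5, "double-headed": 1.0}  # P(heads | hypothesis)
posterior = bayes_update(priors, likelihoods)
# posterior: fair -> 1/3, double-headed -> 2/3
```

The anti-Occamian-prior point above shows up here directly: the update rule is fixed, but the output is only as sane as the priors fed into it.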
We can discuss both epistemic and instrumental rationality.
I recently started working through this Applied Bayesian Statistics course material, which has done wonders for my understanding of Bayesianism vs. the bag-of-tricks statistics I learned in engineering school.
So I finally picked up a copy of Probability Theory: The Logic of Science, by E.T. Jaynes. It's technical and somewhat intimidating, but there is far more prose than I expected, which makes it surprisingly palatable. We should recommend this more here on Less Wrong.
I find excruciating honesty a worthy ideal, but not everyone is prepared for it. Plainly describing everything you intend to signal and counter-signal might come off as eccentric, but it's worth doing if you can pull it off. It requires the right kind of audience.
Eliezer, how is progress coming on the book on rationality? Will the body of it be the sequences here, but polished up? Do you have an ETA?
I don't claim that deontologists actually consciously think that this is why they're deontologists. It does, however, seem like a plausible explanation of the (developmental-psychological or evolutionary) reason why people end up adopting deontology.
Indeed, I get the impression from the article that a deontologist is someone who makes moral choices based on whether they will feel bad about violating a moral injunction, or good for following it... and then either ignorantly or indignantly denies this is the case, treating the feeling as evidence of a moral judgment's truth, rather than as simply a cached response to prior experience.
Frankly, a big part of the work I do to help people is teaching them to shut off the compelling feelings attached to the explicit and implicit injunctions they picked up in childhood, so I'm definitely inclined to view deontology (at least as described by the article) as a hopelessly naive and tragically confused point of view, well below the sanity waterline... like any other belief in non-physical entities, rooted in mystery worship.
I also seem to recall earlier psychology research suggesting that this sort of thinking is something people naturally tend to grow out of as they get older (stages of moral reasoning). Then again, I also seem to recall more recent dispute about those findings, including accusations of gender bias in the research.
Nonetheless, it's evolutionarily plausible that we'd have a simple, injunction-based emotional trigger system used in early life, until our more sophisticated reasoning abilities come online. And my experience working with my own and other people's brains seems to support this: when broad childhood injunctions are switched off, people's behavior and judgments in the relevant area immediately become more flexible and sophisticated.
Unfortunately, the deontological view sounds like it's abusing higher reasoning simply to retroactively justify whatever (cached-feeling) injunctions are already in place, by finding more-sophisticated ways to spell the injunctions so they don't sound like they have anything to do with one's own past shames, guilts, fears, and other experiences. (What Robert Fritz refers to as an "ideal-belief-reality conflict", or what Shakespeare called, "The lady doth protest too much, methinks." I.e., we create high-sounding ideals and absolute moral injunctions specifically to conceal our personally-experienced failings or conflicts around those issues.)
Of course, I could just be missing the point of deontology entirely. But I can't seem to even guess at what that point would be, because everything I'm reading here seems to closely resemble something that I had to grow out of... making it really hard for me to take it seriously.
Yes! Both you and Kaj Sotala seem right on the money here. Deontology falls flat. A friend once observed to me that consequentialism is a more challenging stand to take because one needs to know more about any particular claim to defend an opinion about it.
I know it's been discussed here on Less Wrong, but Jonathan Haidt's research is really great, and relevant to this discussion. Professor Haidt's work has validated David Hume's assertions that we humans do not reason to our moral conclusions. Instead, we intuit about the morality of an action, and then provide shoddy reasoning as justification one way or the other.
Mike Gibson has a great and interesting question. How would Bayesian methodology address this? Might this be an information cascade?
I think the overjustification effect might be at play.
The overjustification effect occurs when an external incentive such as money or prizes decreases a person's intrinsic motivation to perform a task. According to self-perception theory, people pay more attention to the incentive, and less attention to the enjoyment and satisfaction that they receive from performing the activity. The overall effect is a shift in motivation to extrinsic factors and the undermining of pre-existing intrinsic motivation.
In this case, the reward is status. It's important to note that the person must anticipate the reward, though. People might explicitly seek status, but subconsciously seeking status might provide enough anticipation to create the effect.
I am taking Eliezer's definition of "stupidity" to mean increased incompetence in the field wherein the person gained status. In their field, we would expect high competence. Decreased competence there would come about from diminished interest in the field, via the overjustification effect.
I'm sorry; how is scientific knowledge a public good? Yes, it is nonrivalrous in consumption, but certainly not nonexcludable. Legitimate, peer-reviewed journals charge for subscriptions, individual issues, or even for individual articles online.