Science is simple enough that you can sic a bunch of people on a problem with a crib sheet and an "I can do science, me" attitude, and get a good-enough answer early. The mental toolkit for applying Bayes is harder to hand to people. I am right at the beginning, approaching from a mentally lazy, slightly psychological, and engineering background; the first time I saw the word Bayes was in a certain Harry Potter fanfic a week or so ago. I failed the insightful tests in the early Sequences, caught myself noticing I was confused and not doing anything about it, and then failed all over again in the next set of insightful tests. I have a way to go.
The time it would take me to get an "I can do Bayes, me" attitude, even with a crib sheet, could instead be spent solving a bunch of other problems.
If the choice is between science and Bayes (which, at my low level of training, I suspect is a false choice), then at the moment I would go with science, because I am better at it than at Bayes. It is like how I type Qwerty, not Dvorak: I can type faster in Qwerty even though Dvorak is (allegedly) better.
Given that each person has finite problem-solving time, an argument could be made for applying Science to problems, as it is easier to teach. That being said, "I notice that I am confused" would have saved me a lot of trouble if I had heard of it earlier.
Sure.
More generally, if I don't want to optimize X, but merely want to satisfy some threshold T for X, then I don't really care what the optimal way of doing X is in general; I care which way of doing X gets me across T most cheaply. If getting across T using process P1 costs effort E1 from where I am now, and P2 costs E2, and E2 > E1, and I don't care about anything else, I should choose P1.
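To make the threshold argument concrete, here is a minimal sketch of the decision rule. The process names, quality scores, and effort numbers are all invented for illustration:

```python
def cheapest_satisficer(processes, threshold):
    """Pick the cheapest process that still crosses the threshold.

    processes: list of (name, quality_achieved, cost) tuples.
    Only processes whose quality_achieved >= threshold qualify;
    among those, we minimize cost rather than maximize quality.
    """
    qualifying = [p for p in processes if p[1] >= threshold]
    if not qualifying:
        return None  # nothing satisfices; a different question arises
    return min(qualifying, key=lambda p: p[2])

# Hypothetical numbers: P1 is what I already know, P2 is better but
# its cost includes learning it from scratch.
processes = [
    ("P1: Science", 0.80, 10),  # E1 = 10 units of effort
    ("P2: Bayes",   0.95, 25),  # E2 = 25, learning cost included
]
print(cheapest_satisficer(processes, threshold=0.7))
# -> ('P1: Science', 0.8, 10): both cross T, so the cheaper one wins
```

The point of the sketch is just that once both processes clear T, the quality difference stops mattering and only the costs compare.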
The catch is, like a lot of humans, I also have a tendency to overestimate both the effectiveness of whatever I'm used to doing and the costs of changing to something else. So it's very easy for me to dismiss P2 on the grounds of an argument like the above even in situations where E1 > E2, or where it turns out that I do care about other things, or both.
There are some techniques that help with countering that tendency. For example, it sometimes helps to ask myself from time to time whether, if I were starting from scratch, I would choose P1 or P2. (E.g. "if I were learning to type for the first time, would I learn Dvorak or Qwerty?"). Asking myself that question lets me at least consider which process I think is superior for my purposes, even if I subsequently turn around and ignore that judgment due to status-quo bias.
That isn't great, but it's better than failing to consider which process I think is better.
Well said. In considering your response, I notice that a process P's cost E can include the cost of learning the process if necessary, which is something that was concerning me.
I am now considering a more complicated case.
You are in a team of people, and you are not the team leader. Some of the team are scientists, some are magical thinkers, and you are the only Bayesian.
Given an arbitrary task that can be better optimised using Bayesian thinking, is there a way of applying a "Bayes patch" to the work of your teammates, so that they can benefit from the fruits of your Bayesian thinking without knowing it themselves?
I suppose I am trying to ask how easily or how well Bayes can be applied to undirected work by non-Bayesian operators. If I were a scientist in a group full of magical thinkers, all of us given a task, I do not know what they would come up with, but I reckon I could make some scientific use of the information they generate. Is the same true for Bayes?
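One toy sketch of what "making Bayesian use" of teammates' work might look like: treat each teammate's report as evidence and update a prior with Bayes' rule, without the teammates needing to know any of this. The likelihoods below are invented for illustration; the only assumption is that even a weak reporter's output carries some evidential weight:

```python
def bayes_update(prior, p_report_if_true, p_report_if_false):
    """Posterior P(H | report) after one teammate reports in favor of H.

    prior: current P(H).
    p_report_if_true / p_report_if_false: how likely this teammate is
    to give that report when H is true vs. false (their reliability).
    """
    numerator = prior * p_report_if_true
    return numerator / (numerator + (1 - prior) * p_report_if_false)

# Hypothetical reliabilities: a scientist's careful experiment is
# strong evidence; a magical thinker's hunch is weak evidence, but
# both still move the posterior a little in the same direction.
p = 0.5                          # prior on the hypothesis
p = bayes_update(p, 0.90, 0.20)  # scientist reports "it works"
p = bayes_update(p, 0.55, 0.45)  # magical thinker agrees
print(round(p, 3))
```

The design point is that the "Bayes patch" lives entirely in the aggregator: teammates just produce reports as they normally would, and the lone Bayesian assigns each source a likelihood ratio and multiplies them through.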