Comment author: TheOtherDave 04 May 2012 03:06:11PM 2 points [-]

Sure.

More generally, if I don't want to optimize X, but merely want to satisfy some threshold T for X, then I don't really care what the optimal way of doing X is in general; I care which way of doing X gets me across T most cheaply. If getting across T using process P1 costs effort E1 from where I am now, and P2 costs E2, and E2 > E1, and I don't care about anything else, I should choose P1.
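That decision rule can be sketched as a few lines of code (a minimal sketch; the process names, achievement levels, and effort costs are all invented for illustration):

```python
# Satisficing: among processes that get X across threshold T,
# pick the one with the lowest effort cost from where you are now,
# not the one that optimizes X the hardest.

def cheapest_satisficer(processes, threshold):
    """processes: list of (name, achieved_level, effort_cost) tuples."""
    viable = [p for p in processes if p[1] >= threshold]
    if not viable:
        return None  # nothing crosses T; satisficing fails
    return min(viable, key=lambda p: p[2])

# Hypothetical numbers: P2 pushes X higher, but P1 crosses T more cheaply.
processes = [("P1", 70, 10), ("P2", 95, 40)]
print(cheapest_satisficer(processes, threshold=60))  # ("P1", 70, 10)
```

The point of the toy numbers is that P2's superiority at X is irrelevant once both processes clear the threshold; only the effort costs matter.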

The catch is, like a lot of humans, I also have a tendency to overestimate both the effectiveness of whatever I'm used to doing and the costs of changing to something else. So it's very easy for me to dismiss P2 on the grounds of an argument like the above even in situations where E1 > E2, or where it turns out that I do care about other things, or both.

There are some techniques that help with countering that tendency. For example, it sometimes helps to ask myself from time to time whether, if I were starting from scratch, I would choose P1 or P2. (E.g. "if I were learning to type for the first time, would I learn Dvorak or Qwerty?"). Asking myself that question lets me at least consider which process I think is superior for my purposes, even if I subsequently turn around and ignore that judgment due to status-quo bias.

That isn't great, but it's better than failing to consider which process I think is better.

Comment author: Untermensch 05 May 2012 11:58:36AM 0 points [-]

Well said. Considering your response, I notice that the cost E of a process P has room to include the cost of learning that process if necessary, which was something that had been concerning me.

I am now considering a more complicated case.

You are in a team of people of which you are not the team leader. Some of the team are scientists, some are magical thinkers, you are the only Bayesian.

Given an arbitrary task which can be better optimised using Bayesian thinking, is there a way of applying a "Bayes patch" to the work of your teammates so that they can benefit from the fruits of your Bayesian thinking without knowing it themselves?

I suppose I am trying to ask how easily or how well Bayes can be applied to undirected work by non-Bayesian operators. If I were a scientist in a group full of magical thinkers, all of us set a task, I do not know what they would come up with, but I reckon I would be able to make some scientific use of the information they generate. Is the same true for Bayes?

Comment author: Untermensch 04 May 2012 02:00:23PM 2 points [-]

Science is simple enough that you can sic a bunch of people on a problem with a crib sheet and an "I can do science, me" attitude, and get a good-enough answer early. The mental toolkit for applying Bayes is harder to give to people. I am right at the beginning, approaching from a mentally lazy background with a little psychology and engineering; the first time I saw the word Bayes was in a certain Harry Potter fanfic a week or so ago. I failed the insightful tests in the early sequences, caught myself noticing I was confused and not doing anything about it, and then failed all over again in the next set of insightful tests. I have a way to go.

The time it takes for me to get an "I can do Bayes, me" attitude, even with a crib sheet, could have been spent solving a bunch of other problems.

If the choice is between science and Bayes (which, at my low level of training, I suspect is a false choice), then at the moment I would go with science, because I am better at it than at Bayes. Likewise, I type Qwerty, not Dvorak, because I can type faster in Qwerty even though Dvorak is (allegedly) better.

Given that each person has finite problem-solving time, an argument could be made for applying science to problems, as it is easier to teach. That being said, "I notice that I am confused" would have saved me a lot of trouble if I had heard of it earlier.

Comment author: Untermensch 02 May 2012 12:35:09PM *  1 point [-]

Edit - I didn't read the premises correctly. I missed the importance of the bit "Your mind keeps drifting to the explanations you use on television, of why each event plausibly fits your market theory. But it rapidly becomes clear that plausibility can't help you here—all three events are plausible. Fittability to your pet market theory doesn't tell you how to divide your time. There's an uncrossable gap between your 100 minutes of time, which are conserved; versus your ability to explain how an outcome fits your theory, which is unlimited."

The time one spends preparing excuses is only loosely, and also inversely, linked to how easy the event is to explain. When you are unsure which outcome you will need to excuse, what you want is not for the "most likely to be needed" excuse to be "really good" but for any excuse you might need to be "as good as possible."

Even if your pet theory is so useless as to be utterly general, it should still be possible to estimate which event is easiest to explain relative to the others, and that is where the least time should be spent. Failing that, if the events are all equally easy to explain with your pet theory, then the time spent trying to work out where to allocate your time would be better spent writing whichever explanation, up or down, you think more likely of the two, until it is as good as you can get it in just under half the time; then do the same for the other; then spend a few minutes at the end saying how these cancel out if the market stays the same or similar.

Better still would be to write a long list of excuses with predicted up and down values, and use them to build a range of levels of upness and downness that you can combine in any number to excuse any specific level of movement. "Normally the reserve announcement would have had a huge upwards effect on the market, but because it was rainy today and baked beans are on sale in Wal-Mart, this is reflected in the only slight increase seen when looking at the market as a whole." This way you can even justify trends right up until the moment of truth: "Earlier in the day the market was dropping due to the anticipated reserve announcement, but once it was discovered that Bolivia was experiencing solar flares, this slowed the downward trend, with the flotation of shares in Greenpeace flinging the market back up again."
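The excuse-list trick can be made concrete: given a handful of signed excuse-effects, some combination can be found to "explain" nearly any observed movement, which is exactly what makes such a theory vacuous. A toy sketch (all excuse names and point values invented for illustration):

```python
from itertools import combinations

# Each excuse carries a signed "effect" on the market, in points.
excuses = {
    "reserve announcement": +30,
    "rain": -5,
    "baked beans on sale": -20,
    "Bolivian solar flares": +10,
    "Greenpeace flotation": +15,
}

def fit_excuses(observed_move, tolerance=3):
    """Find a combination of excuses whose summed effect
    matches the observed move to within the tolerance."""
    names = list(excuses)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            if abs(sum(excuses[n] for n in combo) - observed_move) <= tolerance:
                return combo
    return None

# Whatever the market did, an explanation is on hand:
print(fit_excuses(+5))   # some combination summing to roughly +5
print(fit_excuses(-25))  # and another one for a drop
```

With only five excuses there are already 31 possible combinations; the more excuses on the list, the denser the coverage, and the less any successful "fit" tells you about the theory.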

Let's use something more "predictable" for illustrative purposes: you are a physics teacher in 1960s/70s America; some serious-looking people in suits turn up at your door; their pet scientist and all his notes were disappeared by the Reds, and your country needs you.

After the time wasted arguing that it was insane to even ask you to do this, you have both a gun to your head and 100 minutes left to come up with excuses as to why the "Hammer and Feather on the Moon" experiment went any of the three ways*. Given that you have good reason to believe the experiment may not go as you predict, spending 99.99% of your time on the obvious answer is a very unwise use of your time resource. In fact it may be wiser to spend one minute on the obvious answer so as to have more time to try to excuse the feather hitting first.
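The intuition that spreading your 100 minutes beats going all-in can be checked numerically under a simple toy model: suppose explanation quality grows with diminishing returns, quality(t) = log(1 + t), and you want to maximize expected quality across the outcomes. The probabilities and the quality function here are both made-up assumptions, chosen only to illustrate the shape of the trade-off:

```python
import math

def expected_quality(probs, times):
    """Expected explanation quality, with diminishing returns
    quality(t) = log(1 + t) for t minutes spent on an outcome."""
    return sum(p * math.log(1 + t) for p, t in zip(probs, times))

# Made-up probabilities: hammer first, together, feather first.
probs = [0.90, 0.07, 0.03]
total = 100

# "All eggs in one basket" vs a probability-weighted split.
lopsided = [total, 0, 0]
weighted = [p * total for p in probs]

print(expected_quality(probs, lopsided))
print(expected_quality(probs, weighted))  # the split scores higher
```

Under this model the probability-weighted split beats putting all 100 minutes on the favourite, because the first few minutes on an unlikely outcome buy more quality than the last few minutes polishing the likely one.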

*Turns out that the president had been told Russian telekinetics were going to mess with the results of the experiment to make Americans believe the moon landings had been faked; or, if you prefer, perhaps they were worried that the props department in Area 51 hadn't got the tensions on the invisible wires right, yet...
