AlephNeil comments on Bayes' Theorem Illustrated (My Way) - Less Wrong

126 Post author: komponisto 03 June 2010 04:40AM


Comments (191)


Comment author: JoshuaZ 04 June 2010 01:36:50PM 1 point

The Bayesian method will examine what it has and decide the probability of different situations. Other than that, it doesn't actually do anything; it takes an entirely different system to actually act on the information given. If it is a simple system that just assumes whichever hypothesis has the highest probability is correct, then it isn't going to bother testing it.

But a Bayesian won't assume that whichever hypothesis has the highest probability is correct. That's one of the main points of a Bayesian approach: every claim is probabilistic. If one claim is more likely than another, the Bayesian isn't going to lie to itself and say that the most probable claim now has a probability of 1. That's not Bayesianism. You seem to be engaging in what may be a form of the mind projection fallacy, in that humans often take what seems to be a high-probability claim and then treat it as if it had a much, much higher probability (this is due to a variety of cognitive biases, such as confirmation bias and belief overkill). A good Bayesian doesn't do that. I don't know where you are getting this notion of a "simple system" that did that. If it did, it wouldn't be a Bayesian.

Comment author: Houshalter 04 June 2010 02:31:19PM 0 points

But a Bayesian won't assume that whichever hypothesis has the highest probability is correct. That's one of the main points of a Bayesian approach: every claim is probabilistic. If one claim is more likely than another, the Bayesian isn't going to lie to itself and say that the most probable claim now has a probability of 1. That's not Bayesianism. You seem to be engaging in what may be a form of the mind projection fallacy, in that humans often take what seems to be a high-probability claim and then treat it as if it had a much, much higher probability (this is due to a variety of cognitive biases, such as confirmation bias and belief overkill). A good Bayesian doesn't do that. I don't know where you are getting this notion of a "simple system" that did that. If it did, it wouldn't be a Bayesian.

I'm not exactly sure what you mean by all of this. How does a Bayesian system make decisions, if not by just going with its most probable hypothesis?

Comment author: AlephNeil 04 June 2010 03:05:17PM 5 points

You try to maximize your expected utility. Perhaps, having done your calculations, you think that action X has a 5/6 chance of earning you £1 and a 1/6 chance of killing you (perhaps someone's promised you £1 if you play Russian roulette).

Presumably you don't base your decision entirely on the most likely outcome.
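The expected-utility reasoning above can be sketched in a few lines. The utility numbers here are hypothetical, invented for illustration: winning £1 is worth +1, and death is assigned a large negative utility.

```python
# Minimal sketch of an expected-utility calculation, using made-up
# utility numbers (not anything specified in the discussion above).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Action X: 5/6 chance of winning £1, 1/6 chance of dying.
play = [(5 / 6, 1), (1 / 6, -1_000_000)]
# Declining the game leaves you exactly where you started.
decline = [(1.0, 0)]

eu_play = expected_utility(play)        # heavily negative
eu_decline = expected_utility(decline)  # zero

best = "play" if eu_play > eu_decline else "decline"
print(best)
```

Even though the most likely single outcome of playing is winning £1, the expected-utility calculation weighs the unlikely catastrophic outcome by its utility and recommends declining.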

Comment author: Houshalter 04 June 2010 03:19:41PM -1 points

So in this scenario you have to decide how much your life is worth in money. You can go home and take no chance of dying, or accept a 1/6 chance of death to earn X amount of money. It's an extension of the basic risk/reward problem, and you have to decide how much risk is worth in money before you can solve it. That's a problem, because as far as I know, Bayesianism doesn't cover that.

Comment author: AlephNeil 04 June 2010 03:39:48PM 7 points
  1. It's not the job of 'Bayesianism' to tell you what your utility function is.

  2. This [by which I mean, "the question of where the agent's utility function comes from"] doesn't have anything to do with the question of whether Bayesian decision-making takes account of more than just the most probable hypothesis.
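The second point can be made concrete with a small sketch: given a utility function (from wherever it comes), a Bayesian decision weights every hypothesis by its probability rather than acting as if the most probable one were certain. All hypothesis names and numbers below are hypothetical.

```python
# Hypothetical illustration: the action favored by the most probable
# (MAP) hypothesis need not be the action with the highest expected
# utility over the whole distribution.

posterior = {"H1": 0.6, "H2": 0.4}  # H1 is the most probable hypothesis

# Utility of each action under each hypothesis (made-up numbers).
utility = {
    "act_a": {"H1": 10, "H2": -100},  # great if H1 holds, disastrous if H2
    "act_b": {"H1": 5, "H2": 5},      # safe either way
}

def expected_utility(action):
    return sum(posterior[h] * utility[action][h] for h in posterior)

best = max(utility, key=expected_utility)
# Acting only on H1 would pick act_a; the full expected-utility
# calculation picks act_b, because the 0.4-probability hypothesis
# carries a large penalty for act_a.
print(best)
```

This is exactly the sense in which Bayesian decision-making "takes account of more than just the most probable hypothesis": the less probable hypothesis still enters the calculation in proportion to its probability.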