JoshuaZ comments on Bayes' Theorem Illustrated (My Way) - Less Wrong

126 points · Post author: komponisto 03 June 2010 04:40AM

Comment author: JoshuaZ 04 June 2010 03:16:46AM 0 points [-]

Well, for example, suppose the evidence leads you to believe that something is true, and there is an easy, simple, reliable test to prove it's not true; why would the Bayesian method waste its time? Imagine you witness something which could be possible, but is extremely odd, like gravity not working. It could be a hallucination, or a glitch if you're talking about a computer, and there might be an easy way to prove whether it is or isn't. Under either scenario, whether it's a hallucination or reality is just weird, it makes an assumption and then has no reason to prove whether this is correct. Actually, that might have been a bad example, but pretty much every scenario you can think of, where making an assumption can be a bad thing and you can test the assumption, would work.

If there is an "easy, simple, reliable test" to determine the claim's truth within a high confidence, why do you think a Bayesian wouldn't make that test?

Well, if you can't program a viable AI out of it, then it's not a universal truth to rationality.

Can you expand your logic for this? In particular, it seems like you are using a definition of "universal truth to rationality" which needs to be expanded out.

Comment author: Houshalter 04 June 2010 01:06:11PM 0 points [-]

If there is an "easy, simple, reliable test" to determine the claim's truth within a high confidence, why do you think a Bayesian wouldn't make that test?

Because it's not a decision-making theory, but one that judges probability. The Bayesian method will examine what it has and decide the probability of different situations. Other than that, it doesn't actually do anything. It takes an entirely different system to actually act on the information given. If that system is simple and just assumes whichever hypothesis has the highest probability is correct, then it isn't going to bother testing it.

Comment author: JoshuaZ 04 June 2010 01:36:50PM *  1 point [-]

The Bayesian method will examine what it has and decide the probability of different situations. Other than that, it doesn't actually do anything. It takes an entirely different system to actually act on the information given. If that system is simple and just assumes whichever hypothesis has the highest probability is correct, then it isn't going to bother testing it.

But a Bayesian won't assume that whichever claim has the highest probability is correct. That's one of the whole points of a Bayesian approach: every claim is probabilistic. If one claim is more likely than another, the Bayesian isn't going to lie to itself and say that the most probable claim now has a probability of 1. That's not Bayesianism. You seem to be engaging in what may be a form of the mind projection fallacy, in that humans often take what seems to be a high-probability claim and then treat it as if it has a much, much higher probability (due to a variety of cognitive biases such as confirmation bias and belief overkill). A good Bayesian doesn't do that. I don't know where you are getting this notion of a "simple system" that does that. If it did, it wouldn't be a Bayesian.

Comment author: Houshalter 04 June 2010 02:31:19PM 0 points [-]

But a Bayesian won't assume that whichever claim has the highest probability is correct. That's one of the whole points of a Bayesian approach: every claim is probabilistic. If one claim is more likely than another, the Bayesian isn't going to lie to itself and say that the most probable claim now has a probability of 1. That's not Bayesianism. You seem to be engaging in what may be a form of the mind projection fallacy, in that humans often take what seems to be a high-probability claim and then treat it as if it has a much, much higher probability (due to a variety of cognitive biases such as confirmation bias and belief overkill). A good Bayesian doesn't do that. I don't know where you are getting this notion of a "simple system" that does that. If it did, it wouldn't be a Bayesian.

I'm not exactly sure what you mean by all of this. How does a Bayesian system make decisions if not by just going on its most probable hypothesis?

Comment author: jimrandomh 04 June 2010 03:04:35PM 6 points [-]

To make decisions, you combine probability estimates of outcomes with a utility function, and maximize expected utility. A possibility with very low probability may nevertheless change a decision, if that possibility has a large enough effect on utility.
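
A minimal sketch of this in Python (the probabilities and utilities are illustrative, not anything from the thread): a low-probability outcome with a large enough negative utility flips the decision away from the action whose most likely outcome is best.

```python
def expected_utility(outcomes):
    """outcomes: a list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Action A: almost always a small gain, with a 0.1% chance of catastrophe.
action_a = [(0.999, 10.0), (0.001, -100000.0)]
# Action B: a certain, modest gain.
action_b = [(1.0, 5.0)]

print(expected_utility(action_a))  # 0.999*10 - 0.001*100000 = -90.01
print(expected_utility(action_b))  # 5.0

# A's most probable outcome (utility 10) beats B's, yet maximizing expected
# utility picks B, because the rare outcome is weighted by how bad it is.
```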

Comment author: Houshalter 04 June 2010 03:41:46PM -1 points [-]

See the reply I made to AlephNeil. Also, this still doesn't change my scenario. If there's a way to test a hypothesis, I see no reason the Bayesian method ever would, even if it seems like common sense to look before you leap.

Anyone know why I can only post comments every 8 minutes? Is the bandwidth really that bad?

Comment author: jimrandomh 04 June 2010 03:56:05PM *  3 points [-]

Bayesianism is only a predictor; it gets you from prior probabilities plus evidence to posterior probabilities. You can use it to evaluate the likelihood of statements about the outcomes of actions, but it will only ever give you probabilities, not normative statements about what you should or shouldn't do, or what you should or shouldn't test. To answer those questions, you need to add a decision theory, which lets you reason from a utility function plus a predictor to a strategy, and a utility function, which takes a description of an outcome and assigns a score indicating how much you like it.
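
For concreteness, a tiny sketch of the "predictor" half (hypothetical numbers, not from the original comment): Bayes' theorem maps a prior plus the likelihood of the evidence to a posterior probability, and stops there.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem, from P(H), P(E | H), and P(E | ~H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Prior of 0.3 that the hypothesis is true; the observed evidence is four
# times as likely if the hypothesis is true as if it is false.
print(posterior(0.3, 0.8, 0.2))  # ~0.63
```

The output is only a probability; whether it is worth running a further test is a question for the decision theory and utility function layered on top.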

The rate-limit on posting isn't because of bandwidth, it's to defend against spammers who might otherwise try to use scripts to post on every thread at once. I believe it goes away with karma, but I don't know what the threshold is.

Comment author: SilasBarta 04 June 2010 04:11:31PM 2 points [-]

Anyone know why I can only post comments every 8 minutes? Is the bandwidth really that bad?

You face limits on your rate of posting if you're at or below 0 karma, which seems to be the case for you. How you got modded down so much, I'm not so sure of.

Comment author: thomblake 04 June 2010 04:22:19PM 3 points [-]

How you got modded down so much, I'm not so sure of.

Bold, unjustified political claims. Bold, unjustified claims that go against consensus. Bad spelling/grammar. Also a Christian, but those comments don't seem to have negative karma.

Comment author: Christian_Szegedy 04 June 2010 07:45:48PM *  7 points [-]

I can attest that being Christian itself does not seem to make a negative difference. :D

Comment author: Blueberry 04 June 2010 08:56:52PM 1 point [-]

Upvoted. That took me a minute to get.

Comment author: SilasBarta 04 June 2010 04:27:09PM 0 points [-]

Yeah, I hadn't been following Houshalter very closely, and the few comments that I did see weren't about politics and seemed at least somewhat reasonable. (Maybe I should have checked the posting history, but I was just saying I'm not sure, not that the opposite would be preferable.)

Comment author: Houshalter 04 June 2010 06:03:06PM *  -2 points [-]

Bold, unjustified political claims.

What bold unjustified political claims? You do realise that every other person on this site I've met so far has some kind of extreme political view. I thought I was kind of reasonable.

Bold, unjustified claims that go against consensus.

In other words, I disagreed with you. I always look for the reasons to doubt something or believe in something else before I just "go along with it".

Bad spelling/grammar.

What's wrong with my spelling/grammar? I double check everything before I post it!

Also a Christian

You're persecuting me because of my religion!?

Whatever. I'll post again in 8 minutes I guess.

Comment author: Alicorn 04 June 2010 06:07:37PM *  7 points [-]

Whats wrong with my spelling/grammar? I double check everything before I post it!

In this comment:

Whats -> What's

Your -> You're

Also, arguably a missing comma before "I guess".

Comment author: thomblake 04 June 2010 06:27:04PM 4 points [-]

What bold unjustified political claims? You do realise that every other person on this site I've met so far has some kind of extreme political view. I thought I was kind of reasonable.

Emphasis on 'unjustified'. Example. This sounds awfully flippant and sure of yourself - "This system wouldn't work at all". Why do you suppose so many people, including professional political scientists / political philosophers / philosophers of law, think that it would work? Do you have an amazing insight that they're all missing? Sure, there are people with many different positions on this issue, but unless you're actually going to join the debate and give solid reasons, you weren't really contributing anything with this comment.

Also, comments on political issues are discouraged, as politics is the mind-killer. Unless you're really sure your political comment is appropriate, hold off on posting it. And if you're really sure your political comment is too important not to post, you should check to make sure you're being rational, as that's a good sign you're not.

In other words, I disagreed with you. I always look for the reasons to doubt something or believe in something else before I just "go along with it".

Again, emphasis on 'unjustified'. If people here believe something, there are usually very good reasons for it. Going against that without at least attempting a justification is not recommended. Here are hundreds of people who have spent years trying to understand how to, in general, be correct about things, and they have managed to reach agreement on some issues. You should be shaken by that, unless you know precisely where they've all gone wrong, and in that case you should say so. If you're right, they'll all change their minds.

Also a Christian

Your[sic] persecuting me because of my religion!?

You've indicated you have false beliefs. That is a point against you. Also, if you think the world is flat, the moon is made of green cheese, or 2+2=3, and don't manage to fix that when someone tells you you're wrong, rationalists will have a lower opinion of you. If you manage to convince them that 2+2=3, then you win back more points than you've lost, but it's probably not worth the try.

Comment author: JoshuaZ 04 June 2010 06:24:06PM *  4 points [-]

Bold, unjustified claims that go against consensus.

In other words, I disagreed with you. I always look for the reasons to doubt something or believe in something else before I just "go along with it".

No. In other words, you've made claims that assume statements against consensus, often without even realizing it or giving any justification when you do so. As I already explained to you, the general approach at LW has been hashed out quite a bit. Some people (such as myself) disagree with a fair bit of it. For example, I'm much closer to being a traditional rationalist than a Bayesian rationalist, and I also assign a very low probability to a Singularity-type event. But I'm aware enough to know when I'm operating under non-consensus views, so I'm careful to be explicit about what those views are and, if necessary, note why I have them. I'm not the only such example. Alicorn, for example (who also replied to this post), has views on morality that are a distinct minority on LW, but Alicorn is careful whenever these come up to reason carefully and make her premises explicit. Thus, her comments are far more likely to be voted up than down.

Your persecuting me because of my religion!?

Well, for the people complaining about grammar: "Your" -> "You're"

But no, you've only mentioned your religious views twice, I think, and once in passing. The downvotes there were, I'm pretty sure, because your personal religious viewpoint was utterly beside the point being made about the general LW consensus.

Comment author: Blueberry 04 June 2010 08:57:40PM 1 point [-]

How you got modded down so much, I'm not so sure of.

I'm guessing that confusing "too" and "to", and "its" and "it's", contributed.

Comment author: JoshuaZ 04 June 2010 04:41:57PM 0 points [-]

For the same reason you were incorrect in your reply to AlephNeil: performing experiments can increase utility if which course of action is optimal depends on which hypothesis is most likely.

Comment author: Houshalter 04 June 2010 06:27:26PM -1 points [-]

If your utility function's goal is to get the most accurate hypothesis (not to act on it), sure. Otherwise, why waste its time testing something that it already believes is true? If your goal is to get the highest "utility" possible, then wasting time or resources, no matter how small, is inefficient. This means that you're moving the blame off the Bayesian end and onto the "utility function", but it's still a problem.

Comment author: JoshuaZ 04 June 2010 06:38:44PM *  4 points [-]

If your utility function's goal is to get the most accurate hypothesis (not to act on it), sure. Otherwise, why waste its time testing something that it already believes is true? If your goal is to get the highest "utility" possible, then wasting time or resources, no matter how small, is inefficient. This means that you're moving the blame off the Bayesian end and onto the "utility function", but it's still a problem.

But you don't believe it is true; there's some probability associated with it. Consider, for example, the following situation. Your friend rolls a standard pair of six-sided dice without you seeing them. If you guess the correct total, you get $1000. Now, it is clear that your best guess is to guess 7, since that is the most common outcome. So you guess 7, and 1/6 of the time you get it right.

Now, suppose you have a slightly different game where, before you make your guess, you may pay your friend $1 and the friend will tell you the lowest number that appeared. You seem to think that for some reason a Bayesian wouldn't do this because they already know that 7 is most likely. But of course they would, because paying the $1 increases their expected payoff.
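
A quick brute-force check of these numbers (a sketch added for illustration; it just enumerates the 36 equally likely rolls of the game described above):

```python
from collections import Counter
from itertools import product

rolls = list(product(range(1, 7), repeat=2))  # all 36 equally likely rolls

# Guessing blind: the best single guess is the most common total, 7.
totals = Counter(a + b for a, b in rolls)
best_total, count = totals.most_common(1)[0]
print(best_total, 1000 * count / 36)  # 7, ~166.67 expected dollars

# Paying $1 to learn the lower die: for each possible report, guess the
# total that is most common among the rolls consistent with it.
wins = 0
for low in range(1, 7):
    consistent = Counter(a + b for a, b in rolls if min(a, b) == low)
    wins += consistent.most_common(1)[0][1]
print(1000 * wins / 36 - 1)  # ~304.56 expected dollars

# Before asking, 7 is still the single most likely total; buying the
# evidence nevertheless raises the expected payoff, so the Bayesian pays.
```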

In general, increasing the accuracy of your map of the universe is likely to increase your utility. Sometimes it isn't, and so we don't bother. Neither a Bayesian rationalist nor a traditional rationalist is going to try to, say, count all the bricks on the facade of their apartment building, even though it would increase the accuracy of their model, because this isn't an interesting piece of the model and isn't at all likely to tell them anything useful compared to more limited forms of investigation. If one were immortal and really running low on things to do, maybe counting them would be a high priority.

Comment author: Houshalter 10 June 2010 02:26:11AM 0 points [-]

All right, consider a situation where there is a very, very small probability that something will work, but it gives infinite utility (or at least something extraordinarily large). The risk of doing it is also really high, but because the risk is finite, the Bayesian utility function will evaluate the action as acceptable because of the infinite reward involved. On paper, this works out: if you do it enough times, you eventually succeed, and after you subtract the total cost of all the other attempts, you still have infinity. But in practice most people consider this a very bad course of action. The risk can be very high, perhaps your life, so even the traditional rationalist would avoid doing this. Do you see where the problem is? It's the fact that you only get a finite number of tries in reality, but the Bayesian utility function calculates it as though you did it an infinite number of times and gives you the net utility.
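
A numeric sketch of the kind of situation being described (the figures are entirely made up, just to make the "finite number of tries" point concrete):

```python
p_win = 1e-6    # tiny chance of the huge payoff
payoff = 1e9    # enormous reward if it works
cost = 100      # price of each attempt
tries = 1000    # how many attempts you realistically get

print(p_win * payoff - cost)   # +900 expected value per try, "on paper"
print((1 - p_win) ** tries)    # ~0.999: probability you never win at all
print(-cost * tries)           # -100000: the outcome you almost surely get
```

The expected value per try is positive, yet over a realistic number of tries the realized outcome is almost certainly a large loss, which is the gap this comment is pointing at.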

Comment author: thomblake 04 June 2010 06:30:13PM 1 point [-]

Otherwise, why waste its time testing something that it already believes is true?

Because it might be false. If your utility function requires you to collect green cheese, and so you want to make a plan to go to the moon to collect the green cheese, you should know how much you'll have to spend getting to the moon, and what the moon is actually made of. And so it is written, "If you fail to achieve a correct answer, it is futile to protest that you acted with propriety."

Comment author: AlephNeil 04 June 2010 03:05:17PM 5 points [-]

You try to maximize your expected utility. Perhaps having done your calculations, you think that action X has a 5/6 chance of earning you £1 and a 1/6 chance of killing you (perhaps someone's promised you £1 if you play Russian Roulette).

Presumably you don't base your decision entirely on the most likely outcome.

Comment author: Houshalter 04 June 2010 03:19:41PM -1 points [-]

So in this scenario you have to decide how much your life is worth in money. You can go home and take no chance of dying, or accept a 1/6 chance of dying for the chance to earn X amount of money. It's an extension of the risk/reward problem, basically, and you have to decide how much risk is worth in money before you can solve it. That's a problem, because as far as I know, Bayesianism doesn't cover that.

Comment author: AlephNeil 04 June 2010 03:39:48PM *  7 points [-]
  1. It's not the job of 'Bayesianism' to tell you what your utility function is.

  2. This [by which I mean, "the question of where the agent's utility function comes from"] doesn't have anything to do with the question of whether Bayesian decision-making takes account of more than just the most probable hypothesis.