IlyaShpitser comments on Open thread, Dec. 21 - Dec. 27, 2015 - Less Wrong

2 Post author: MrMind 21 December 2015 07:56AM


Comment author: IlyaShpitser 22 December 2015 07:03:22PM *  0 points [-]

As I said, the issue can be corrected for if the number of hypotheses is known, but not if the number of possibilities is unknown

You don't need to know the number, you need to know the model (which could have infinite hypotheses in it).

Your model (hypothesis set) could contain infinitely many hypotheses, indexed by continuous parameters — say, all possible means and variances of a Gaussian. You can have a prior on this space, which is a density. You update the density with evidence to get a new density. This is Bayesian stats 101. Why not just go read about it? Bishop's machine learning book is good.
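A minimal sketch of this kind of update, for the simpler conjugate case of a Gaussian with known noise variance and a Gaussian prior density on the unknown mean (the function name and numbers here are illustrative, not from the comment):

```python
def update_gaussian_mean(prior_mu, prior_var, data, noise_var):
    """Conjugate Bayesian update for the mean of a Gaussian.

    Prior over the unknown mean: N(prior_mu, prior_var).
    Each datum is assumed drawn from N(mean, noise_var).
    Returns the posterior density's parameters (mu, var).
    """
    n = len(data)
    # Precisions (inverse variances) add; the posterior mean is a
    # precision-weighted average of the prior mean and the data.
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mu = post_var * (prior_mu / prior_var + sum(data) / noise_var)
    return post_mu, post_var

# Vague prior centered at 0; data clustered near 5 pulls the density there.
mu, var = update_gaussian_mean(0.0, 100.0, [4.9, 5.1, 5.0, 4.8, 5.2], 1.0)
```

The hypothesis space here (all possible means) is infinite, yet the belief over it is a single density summarized by two numbers, and updating it is just arithmetic.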

Comment author: FrameBenignly 22 December 2015 07:07:06PM 0 points [-]

True, but working from a model is not an inductive method, so it can't be classified as confirmation through inductive inference, which is what I'm criticizing.

Comment author: Lumifer 22 December 2015 07:16:40PM 2 points [-]

You are severely confused about the basics. Please unconfuse yourself before getting to the criticism stage.

Comment author: FrameBenignly 22 December 2015 07:35:19PM 0 points [-]

??? IlyaShpitser, if I understand correctly, is talking about creating a model of a prior, collecting evidence, and then determining whether the model is true or false. That's hypothesis testing, which is deduction, not induction.

Comment author: IlyaShpitser 22 December 2015 07:42:39PM 1 point [-]

You don't understand.

You have a (possibly infinite) set of hypotheses. You maintain beliefs about this set. As you get more data, your beliefs change. To maintain beliefs you need a distribution/density. To do that you need a model (a model is just a set of densities you consider). You may have a flexible model and let the data decide how flexible you want to be (non-parametric Bayes stuff, I don't know too much about it), but there's still a model.
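The loop described above — maintain a distribution over hypotheses, revise it as data arrives — can be sketched for a finite hypothesis set (a hypothetical coin-bias example; the names and numbers are illustrative):

```python
def bayes_update(prior, likelihood, datum):
    """One Bayesian update over a finite set of hypotheses.

    prior: dict mapping hypothesis -> probability (the current beliefs).
    likelihood(h, x): probability of observing datum x under hypothesis h.
    Returns the normalized posterior distribution.
    """
    unnorm = {h: p * likelihood(h, datum) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Model: a coin's heads-probability is 0.2, 0.5, or 0.8, uniform prior.
belief = {0.2: 1 / 3, 0.5: 1 / 3, 0.8: 1 / 3}
coin_lik = lambda h, x: h if x == 1 else 1 - h  # 1 = heads, 0 = tails

for flip in [1, 1, 0, 1, 1]:  # four heads, one tail
    belief = bayes_update(belief, coin_lik, flip)
```

After these five flips the belief mass concentrates on the 0.8 hypothesis. The infinite-hypothesis case replaces the dict with a density, but the structure of the update is the same.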

Suggesting for the third and final time that you get off the internet argument train and go read a book about Bayesian inference.

Comment author: FrameBenignly 22 December 2015 07:50:04PM 0 points [-]

Oh, sorry I misunderstood your argument. That's an interesting solution.

Comment author: gjm 22 December 2015 10:06:05PM 1 point [-]

That interesting solution is exactly what people doing Bayesian inference do. Any criticism you may have that doesn't apply to what Ilya describes isn't a criticism of Bayesian inference.

Comment author: IlyaShpitser 22 December 2015 07:35:00PM *  0 points [-]

As much as I hate to do it, I am going to have to agree with Lumifer, you sound confused. Go read Bishop.