Douglas_Knight comments on Mini-review: 'Proving History: Bayes' Theorem and the Quest for the Historical Jesus' - Less Wrong

Post author: lukeprog 01 February 2012 07:20PM


Comments (34)


Comment author: Douglas_Knight 01 February 2012 08:36:52PM 2 points

In my experience, people who get excited about Bayesian methods and write about applying them to their own field do a terrible job, no better than people who get excited about applying any other method. None of the details of this review moves me from my prior that this book is scientism, considerably worse than the typical book about historical methods. Surely what a review of a book on methods needs is examples.

Comment author: lukeprog 01 February 2012 09:35:13PM * 6 points

I would have liked to see Carrier team up with somebody like Andrew Gelman. That probably would have resulted in a better book on applying Bayes to historical method. But as it stands, Carrier's book is all we've got, and it ain't bad. Can you give an example of a "typical book about historical methods" that you think is pretty good?

Comment author: [deleted] 02 February 2012 01:43:48AM * 4 points

I did a review of a bunch of Peter Turchin's work a couple of years back. I could look it up and post it if people are interested. It isn't specifically Bayesian, but he does apply mathematical modeling and statistical analysis to social processes. I wasn't overly convinced by his methodology, but he did reach some interesting conclusions.

He's got a good amount of work that ISN'T behind a paywall. Here's a sample.

Comment author: lukeprog 02 February 2012 01:55:21AM 2 points

I am interested in that review of yours.

Comment author: [deleted] 02 February 2012 02:07:39AM 3 points

It's long, so I put it in Dropbox. This link should take you there. (If not, let me know; my Dropbox skills are probably sub-par.)

Comment author: gwern 04 June 2012 02:04:54AM * 1 point

Interesting review, but I have to take exception to your last paragraph: I think Turchin is doing the right thing by investigating only a few selected variables (which he has substantial background reason to think are of interest) as inputs to his models. Turning a neural network loose on every possible variable is just begging for massive data-mining and multiple-comparison problems, which eliminate any validity you might hope to have for your results! Worse, if you use all your data initially, no one will be able to test your results for overfitting on a held-out data set...
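The multiple-comparisons worry can be made concrete with a small sketch (hypothetical, not from the thread): if you screen many candidate variables against a single outcome, some will clear a p < 0.05 significance bar by chance alone, even when every one of them is pure noise.

```python
import random
import statistics

random.seed(0)

def corr(xs, ys):
    # Pearson correlation of two equal-length lists
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (sx * sy)

n_obs, n_vars = 30, 200
outcome = [random.gauss(0, 1) for _ in range(n_obs)]

# 200 candidate "predictors" that are pure noise, unrelated to the outcome
predictors = [[random.gauss(0, 1) for _ in range(n_obs)]
              for _ in range(n_vars)]

# |r| > 0.36 is roughly the two-tailed p < 0.05 threshold for n = 30
hits = sum(1 for p in predictors if abs(corr(p, outcome)) > 0.36)
print(f"{hits} of {n_vars} noise variables look 'significant'")
```

With 200 noise variables you should expect around 5% of them, roughly ten, to look "significant"; a model built on an unrestricted variable dump will happily fit these phantom relationships, which is why pre-selecting variables on background knowledge and holding out data for validation both matter.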

Comment author: [deleted] 04 June 2012 02:14:16AM * 1 point

Thanks for the feedback. I would guess you're probably right. My knowledge of data mining practices is actually pretty minimal.

The review, however, was written for a class, so it was academically mandatory (i.e., "if you want an A, you'd better...") to come up with problems in the original research and ways to improve it. The professor seemed to like neural networks, so... (I think I inherited her "just run everything through a neural network" mentality, but I will definitely update my views based on your feedback. Thanks!)

Comment author: prase 01 February 2012 09:51:31PM 2 points

Could you be more concrete? What are the typical failure modes of these people?

Comment author: David_Gerard 02 February 2012 03:13:41PM * -1 points

Which definition of "scientism" are you using? The Oxford Dictionary of Philosophy notes that the word is a term of abuse. Your comment appears to be a general-purpose collection of snarl words and phrases.