
syllogism comments on On the Importance of Systematic Biases in Science - Less Wrong Discussion

26 points | Post author: gwern 20 January 2013 09:39PM




Comment author: syllogism 22 January 2013 11:21:14PM 1 point

If you read any narrow/weak/specific AI papers, then I'd say you do read engineering papers --- that's mostly how I think of my field, computational linguistics, anyway.

The "experiments" I'm doing at the moment are attempts to engineer a better statistical parser of English. We have some human annotated data, and we divide it up into a training section, a development section, and an evaluation section. I write my system and use the training portion for learning, and evaluate my ideas on the development section. When I'm ready to publish, I produce a final score on the evaluation section.

In this case, my experimental error is the extent to which the accuracy figures I report diverge from the accuracy that someone actually using my system will see.

Both systematic and random error abound in these "experiments". A really common source of systematic error is the linguistic annotation we're trying to replicate. We evaluate on data annotated by the same people, according to the same standards, as the data we trained on, and the scientific standards of the linguistics behind that annotation are poor. If some aspects of the annotation are suboptimal for applications of the system, that won't be reflected in my results.