PrawnOfFate comments on Welcome to Less Wrong! (5th thread, March 2013) - Less Wrong
Hi everyone,
I'm a humanities PhD who's been reading Eliezer for a few years and checking out LessWrong for a few months. I'm well-versed in the rhetorical dark arts, thanks to my current education, but I also have a BA in Economics (though math is still my weakest suit). The point is, I like facts, despite the deconstructionist tendency of the humanities since the eighties. Now is a good time for hard-data approaches to the humanities, and I want to join that party. My heart's desire is to workshop research methods with the LW community.
It may break protocol, but I'd like to offer a preview of my project in this introduction. I'm interested in associating the details of print production with an unnamed aesthetic object, which for now we'll call the Big Book, and which is the source of all of our evidence. The Big Book had multiple unknown sites of production, which we'll call Print Shops 1 through n. I want to pin down which parts of the Big Book were made in which Print Shop. Print Shop 1 has Tools (1), and those Tools (1) leave unintended Marks in the Big Book; likewise with Print Shop 2 and its Tools (2). Unfortunately, people in the present don't know which Print Shop had which Tools. Even worse, multiple sets of Tools can leave similar Marks.
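One way to state the inference problem, as a gloss on the framing above (the notation is mine): for a part $b$ of the Big Book and a candidate Print Shop $s$,

$$P(s \mid \mathrm{Marks}(b)) \propto P(\mathrm{Marks}(b) \mid s)\, P(s)$$

where the likelihood comes from whatever is known about each shop's Tools, and the prior from external evidence that the shop could have produced the Big Book at all. The trouble noted above (multiple sets of Tools leaving similar Marks) shows up as likelihoods that barely differ between shops.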
The most obvious solution that I can see is to catalogue the Marks throughout the Big Book and group its parts by the Marks they share, treating each group as the work of a single Print Shop. If nothing else, this method can establish n, the number of Print Shops responsible for the Big Book.
The Bayesian twist on the obvious solution is to add some testing onto the associations above (see the sketch after this list). Specifically:

1. find some books strongly associated with Print Shops [x, y, z], in order to
2. assign a probability for each pattern of Marks to each Print Shop, then
3. revise the initial associations between Print Shops [x, y, z] and the Big Book proportionally.
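A minimal sketch of those three steps in Python; the mark names, counts, smoothing choice, and flat priors are all invented for illustration:

```python
# A toy sketch of steps 1-3 above. All data here (mark types,
# counts, flat priors) is hypothetical.
from collections import Counter

# Step 1: Mark frequencies in reference books securely attributed
# to each candidate Print Shop.
reference_marks = {
    "shop_x": Counter({"bent_serif": 30, "cracked_rule": 5, "pale_ink": 15}),
    "shop_y": Counter({"bent_serif": 10, "cracked_rule": 25, "pale_ink": 15}),
    "shop_z": Counter({"bent_serif": 20, "cracked_rule": 20, "pale_ink": 10}),
}

VOCAB = {mark for counts in reference_marks.values() for mark in counts}

def mark_probabilities(counts, alpha=1.0):
    """Step 2: estimate P(mark | shop), with add-alpha smoothing so a
    mark unseen in a shop's reference books doesn't zero it out."""
    total = sum(counts.values()) + alpha * len(VOCAB)
    return {mark: (counts[mark] + alpha) / total for mark in VOCAB}

def posterior(observed_marks, priors):
    """Step 3: revise the initial shop-to-Big-Book associations in
    proportion to how well each shop predicts the observed Marks."""
    likelihood = {shop: mark_probabilities(c) for shop, c in reference_marks.items()}
    scores = {}
    for shop, prior in priors.items():
        score = prior
        for mark in observed_marks:
            score *= likelihood[shop][mark]
        scores[shop] = score
    total = sum(scores.values())
    return {shop: score / total for shop, score in scores.items()}

# Marks observed in one section of the Big Book, with flat initial priors.
section = ["bent_serif", "bent_serif", "pale_ink", "cracked_rule"]
print(posterior(section, {"shop_x": 1/3, "shop_y": 1/3, "shop_z": 1/3}))
```

Run per section of the Big Book; sections whose posteriors concentrate on the same shop end up grouped together.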
I'm far from an expert in Bayesian methods, but it already seems that something is missing here. Is there some stage at which I should take a control sample? Also, how can I find a logical basis for the initial association step when there are many potential Print Shops? Lastly, how can I account for Tools decaying over time, and so leaving ever more Marks?
How about talking clearly about whatever you are currently hinting at?
I dunno, I find the complexity-hiding capitalized-noun thing strangely attractive. Maybe there should be more capitalized nouns. Why isn't Sheets capitalized?
This is probably coming back to my fascination with graph theory, which has similar but even more exotic terminology. "A spider is a subdivision of a star, which is a kind of tree made up only of leaves and a root; a star with three arcs is called a claw."
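Those last two are easy to poke at; a minimal sketch using networkx (the library choice and node labels are mine):

```python
# Building the quoted examples; node labels are illustrative.
import networkx as nx

# A star with three arcs: the "claw", K_{1,3}.
claw = nx.star_graph(3)
print(sorted(claw.edges()))  # [(0, 1), (0, 2), (0, 3)]: root 0, three leaves

# A spider subdivides the star's arcs: here, each root-leaf edge
# becomes a two-edge path through a new midpoint node.
spider = nx.Graph()
for leaf in (1, 2, 3):
    midpoint = f"m{leaf}"
    spider.add_edge(0, midpoint)
    spider.add_edge(midpoint, leaf)
print(dict(spider.degree()))  # root degree 3, midpoints 2, leaves 1
```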
I was explicitly warned by a professor (who will likely be on my dissertation committee) not to talk about this project widely.
The capitalized nouns are there to highlight key terms. I believe the current description is specific enough to describe the situation accurately and without misleading people, but not so specific as to break my professor's (correct) advice.
Have I broken LW protocol? Obviously, I'm new here.
Did they say why?
Yes. He said that I should be careful about sharing my project because, otherwise, I'll be reading about it in a journal in a few months. His warning may exaggerate the likelihood of a rival researcher and undervalue the expansion of knowledge, but I'm deferring to him in concession to my ignorance, especially regarding the rules of the academy.
"Don't worry about people stealing your ideas. If your ideas are any good, you'll have to ram them down people's throats."
This is heavily context-dependent. Many fields are idea-rich and implementation-poor, in which case you do have to ram ideas down people's throats, because there's a glut of other ideas you have to compete against. But in fields that are implementation-rich and idea-poor, ideas should be guarded until you've implemented them. There are no doubt academic fields where the latter case applies.
Can you name any?
I've been privately told of several such cases in high-energy physics. Below is an excerpt from Politzer's Nobel lecture. He co-discovered asymptotic freedom (the finding that quarks behave as if connected by miniature rubber bands, which have no tension when the quarks are close together).
He does not explicitly say that Gross was tipped off, but it's easy to read between the lines. The rest of his lecture, titled The Dilemma of Attribution, is also worth reading.
It may be more precise to say there are academic groups to which that description applies, and that discretion is worthwhile in their proximity. Examples of those still living will remain private for obvious reasons.
I think Gwern's right on this.
But Humanities has rejected that!
Yep. It's not the Bible. I suspect that there are already good stats compiled on the Q-source, etc.
In a way it's not only futile but limiting to play the guessing game. There are lots of possible applications of Bayesian methods to the humanities. Maybe this discussion will help more projects than my own.
Ah, OK. They hadn't when I wrote it.
That was my first thought too; there's a huge textual analysis tradition relating to the Bible and what I know of it maps pretty closely to the summary, although it's also mature enough that there wouldn't be much reason to obfuscate it like this. But it's not implausible that it applies to some other body of literature. I understand there are some similar things going on in classics, for example.
The specifics shouldn't matter too much. Some types of Mark are going to be a lot more machine-distinguishable than others, though, and that's going to affect the kinds of analysis you can do: differences in spelling and grammar, for example, are far machine-friendlier than differences in letterforms in a manuscript.
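As a toy illustration of the machine-friendly end, assuming the text survives as plain transcribed strings (the variant pairs are invented):

```python
# Counting spelling variants per block of text; the variant pairs
# below are hypothetical stand-ins for whatever the corpus shows.
import re
from collections import Counter

VARIANTS = {"colour": "color", "centre": "center", "connexion": "connection"}
MODERN = set(VARIANTS.values())

def variant_counts(text):
    """Tally old-style vs. modern spellings in one block of text."""
    counts = Counter()
    for word in re.findall(r"[a-z]+", text.lower()):
        if word in VARIANTS:
            counts[f"old:{word}"] += 1
        elif word in MODERN:
            counts[f"new:{word}"] += 1
    return counts

print(variant_counts("The colour of the centre was the color of old parchment."))
# Counter({'old:colour': 1, 'old:centre': 1, 'new:color': 1})
```

Feature vectors like these drop straight into the sort of updating sketched upthread; letterforms would first need image processing just to become countable.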
Thanks for the feedback. I actually cleared up the technical language considerably. I don't think there's any need to get lost in the weeds of the specifics while I'm still hammering out the method.