by [anonymous]
4 min read · 31st Jul 2016 · 6 comments


Edited for clarity (hopefully) with thanks to Squirrell_in_Hell.

Lately, I find myself more and more interested in how the concept of "systematized winning" can be applied to large groups of people who have one thing in common, and that thing is not even a shared time, but a hobby or a general interest in a specific discipline. It doesn't seem (to me) to trouble much the people working on their own individual qualities - performers, martial artists, managers (those who would self-identify as belonging to these sets) - but I am basing this on general impressions and will be glad to be corrected. It does seem to be the norm for some other sets, like sailors, who keep correcting their charts every voyage.

The field in which I have worked for some years (botany) does have something similar to what sailors do, which lets us see how floras change over time, and so on. However, different questions arise when novel sub-disciplines branch off the main trunk, and naturally, the people asking these new questions keep reaching back for some kind of pre-existing observations. And often they don't check how much weight those observations can bear, which, I think, is a bad habit that won't lead to "winning".

It is not "industrial rationality" per se, but a distantly related thing, and I think we might have to recognize it somehow - or at least recognize that it requires different assumptions: no set victory condition, for example. Still, it probably matters to more living people than pure "industrial rationality" does, and ignoring it won't make it go away.

1. Information inheritance?

Questions change. To put it very simply, they might go from "what grows here?" to "how is this plant different from the type specimen?" to [...] to "what plants indicate older habitats?" to "what grew here before?" The first one and the last one are very different problems. The first might have been answered with pure agriculture/"traditional land use" in mind, and the last with timing plants' invasions, estimating tolerance to pollution, and the "shelf life" of seeds in soil. (And often the data are patchy, collected by somebody who wanted to clarify something narrow and specific.)

It is not a feasible goal to ask future botanists to keep monitoring this-or-that habitat or to settle the placement of such-and-such genus (although steps can be taken to make these particular problems attractive to them); and who knows what data they would have liked us to gather? Ecology still absorbs many individual reports of old field trips, which are hard to build into the overall structure beyond "a dot on the map", because the questions have changed. What would other disciplines ask that we cannot anticipate, and is there anything we can do to pass on more than what we ourselves are looking for? Are there things we can confidently assume will turn out to be ruinous or beneficial? I am referring not to "statistics abuse" - people tend to at least say they shouldn't do that - but to narrower things which are still explainable-like-we-are-both-five.

Hindsight is a great thing, for all its biases; so - are there any findings which keep drawing attention, and do they have any common features? Here are my half-baked thoughts on the matter (I expect them to expand, but this is simply what I have right now).

2. Reviewing and expecting to be reviewed.

1) Anything about living organisms written from Antiquity to the Enlightenment is likely to be referred to now and again, for lack of anything better.

2) Recent findings about, say, the non-random allocation of carbon isotopes in different body parts of plants are going to matter for a long time, having implications for determining the ages of various geological strata. Still, they and subsequent work will be all the more needed once we know more about carbon's geochemistry. So isotopic analyses of as many body parts as possible should be given much consideration whenever possible, right? It is sobering to know that you can only do so much, and they might not even put any weight on it.

3. What will help?

I am generally very glad for any research which clarifies issues with established methodologies while also telling us something so deliciously weird - how on Earth do plants segregate atoms on this level?! I suspect there are lots of findings with similarly far-reaching consequences...

And some of them have had their time and should be put to rest. Behind each number there is a method and an error, but worst of all, behind each number are the general work interests of the scientist who obtained it. Take Mr. Campbell, who, together with two or three colleagues, advanced the study of fern life histories about a century ago. He was a great microscopist, grower, and naturalist in general, and we still quote him on, say, the duration of Ophioglossum gametophytes' development. But if you read his work, you will not find any precise measurement of the thing, because it was not the problem he was solving! He was busy setting Ophioglossaceae apart from most of the other ferns, and they are indeed far enough apart that even if his estimate were wrong by ten years, he would still be broadly in the right.

But it is wrong to keep quoting Campbell for the quite different purpose of showing how much faster Ophioglossum gametophytes develop in vitro, because the room for being "broadly in the right" is going to contract rapidly.

Sometimes the non-quantifiable nature of the data actually makes the records easier to review (as when some parson describes a stroll through the woods, naming this fern and that), but often they are too precise for the use that is made of them.

4. Concluding remarks.

There might not be a general recipe, but as far as I can tell, biology (botany) would be much better off if people had more respect for purely methodological studies, and if previous generations' research priorities were more thoroughly weighed when their data were incorporated into truly novel fields. We can already identify the less solid findings from the past, even without knowing the demands of the future. We can stop quoting the classics without disclaimers. We can promote including methodological reviews in college textbooks, even at the cost of removing some theory.

It's already imaginable.


Comments

Let me give some feedback about your writing style, which I find consistently cryptic. You tend to describe your thoughts starting in the middle and giving the context later, or skipping it altogether. E.g. the first sentence reads

I find myself more and more interested in how the concept of "systematized winning" can be applied to a large group of people who have one thing in common, and that not even time, but - in my own very personal case - ...

Until this point, a context like "biology research" etc. does not appear anywhere, and a "large group of people who have one thing in common" could be all people who like ice cream. It is of course possible to decipher what you mean, but by writing in reverse order you make it unnecessarily hard.

~~~

Possibly, some of the problems you are describing could be solved by storing all the raw data collected during research, not just the conclusions. In some cases the amount of data might pose technological problems, but humanity's capacity to store information cheaply is increasing very quickly. So we can just let future generations analyse the data for themselves, if they care to do so.

[anonymous]

Edited a bit. If you could PM me a couple of words on what else to change, I'd be grateful.

I suspect part of the problem is that Romashka's first language is not English, and s/he is adapting idioms from her/his native language by translating them literally.

This was a useful article. Consider making it easier to find by submitting it to the main blog.

Thank you for writing up your insights.

[anonymous]

Thank you. It is entertaining to think about research which is not 'bad' as criticised today, but inefficient for a reason that was once obvious, or just incomplete enough to make more work for future scientists :)

The field would possibly benefit from a concept called triple-blinding, which I learned about in the latest Slate Star Codex linkfest: it refers to studies where the methodology is submitted for peer review before the experiment begins. This at least gives a better focus on the approach and the design of the project.