It's an interesting idea, with some intuitive appeal. It also reminds me of a science fiction novel I read as a kid (the title currently escapes me), so the concept feels a bit mundane to me, in a way. The complexity argument is problematic, though: I suppose one could assume some sort of per-universe Kolmogorov weighting of subjective experience, but that seems dubious without any further justification.
Suppose we had a G.O.D. that takes N bits of input, and uses the input as a starting-point for running a simulation. If the input contains more than one simulation-program, then it runs all of them.
Now suppose we had 2^N of these machines, each with a different input. The number of instantiations of any given simulation-program will be higher the shorter the program is (not just because a shorter bit-string is by itself more likely, but also because it can fit multiple times on one machine). Finally, if we are willing to let the number of machines shrink to zero, the same probability distribution will still hold. So a shorter program (i.e. more regular universe) is "more likely" than a longer/irregular one.
(All very speculative of course.)
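The counting argument above can be sketched numerically. Here is a toy calculation of my own (the function name and the choice of "program" string are illustrative, not from the comment): treat a simulation-program as a specific k-bit string, and count how many times it occurs as a substring across all 2^N possible N-bit inputs. The brute-force count should match the closed form (N - k + 1) * 2^(N - k), so each extra bit of program length roughly halves its share of instantiations.

```python
from itertools import product

def count_instantiations(program: str, n: int) -> int:
    """Total occurrences of `program` as a (possibly overlapping) substring,
    summed over every one of the 2^n possible n-bit inputs."""
    total = 0
    for bits in product("01", repeat=n):
        s = "".join(bits)
        # count occurrences of the program at every offset in this input
        total += sum(s.startswith(program, i)
                     for i in range(n - len(program) + 1))
    return total

N = 10
for k in (2, 4, 6):
    prog = "10" * (k // 2)  # a stand-in k-bit "simulation program"
    brute = count_instantiations(prog, N)
    # each of the N-k+1 offsets fixes k bits, leaving 2^(N-k) free inputs
    closed = (N - k + 1) * 2 ** (N - k)
    print(k, brute, closed)
```

Shorter programs win on both counts the comment mentions: more of the 2^N inputs contain them, and they fit at more offsets within a single input.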
If we see a significant number of instances where the conclusions of a widely-accepted paper are later debunked by a simple test, then we might begin to suspect that something like this is happening.
How so? Could you clarify your reasoning?
Scientists cite papers whose conclusions are convenient to cite (either because they corroborate their views or provide a position to pivot from or argue against), whether or not they have actually read them. Papers with easily debunked conclusions might equally well go unread (and thus unexamined) or be read (and simply trusted).
I think the real test of whether cited publications are read is this: if a publication is consistently cited for a conclusion it does not actually present, that is evidence that no one has actually read it.
I recall from my own research that it was very convenient in the literature to cite one particular publication for a minor but foundational tenet of the field. However, when I finally got a hard copy of the paper, I couldn't find the idea stated explicitly anywhere. The thing is (contradicting what I say above, unfortunately) I think the paper was well read; people just don't double-check citations when the citation seems reasonable.
How so? Could you clarify your reasoning?
My thinking is: Given that a scientist has read (or looked at) a paper, they're more likely to cite it if it's correct and useful than if it's incorrect. (I'm assuming that affirmative citations are more common than "X & Y said Z but they're wrong because..." citations.) If that were all that happened, then the number of citations a paper gets would be strongly correlated with its correctness, and we would expect it to be rare for a bad paper to get a lot of citations. However, if we take into account the fact that citations are also used by other scientists as a reading list, then a paper that has already been cited a lot will be read by a lot of people, of whom some will cite it.
So when a paper is published, there are two forces affecting the number of citations it gets. First, the "badness effect" ("This paper sounds iffy, so I won't cite it") pushes the number of citations down. Second, the "popularity effect" (a lot of people have read the paper, so a lot of people are potential citers) pushes it up. The magnitude of the popularity effect depends mostly on what happens soon after publication, when readership is small and thus more subject to random variation. Of course, for blatantly erroneous papers the badness effect will still predominate, but in marginal cases (like the aphasia example) the popularity effect can swamp the badness effect. Hence we would expect to see more bad papers getting widely cited; and the more obviously bad the widely-cited papers are, the stronger the popularity effect must be.
I suppose one could create a computer simulation if one were interested; I would predict results similar to Simkin & Roychowdhury's.
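For what it's worth, such a simulation might look something like the following toy model (my own construction, not Simkin & Roychowdhury's actual method; all names and parameters are illustrative). Each paper has an inherent quality; would-be citers discover papers either at random or, more often, via existing citation lists, which makes discovery proportional to current citation count; they then cite only if the paper passes a quality check.

```python
import random

def simulate_citations(n_papers=200, n_events=5000, copy_prob=0.9, seed=0):
    """Toy model of the 'popularity effect' (discovery via citation lists)
    versus the 'badness effect' (a quality check before citing)."""
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_papers)]   # inherent worth in [0, 1]
    citations = [1] * n_papers                          # each paper starts with one citation
    for _ in range(n_events):
        if rng.random() < copy_prob:
            # popularity effect: the paper is found via another paper's
            # reference list, so discovery is proportional to current citations
            paper = rng.choices(range(n_papers), weights=citations)[0]
        else:
            # occasional discovery at random, independent of popularity
            paper = rng.randrange(n_papers)
        # badness effect: the reader cites only if the paper seems sound
        if rng.random() < quality[paper]:
            citations[paper] += 1
    return quality, citations

quality, citations = simulate_citations()
top = max(range(len(citations)), key=citations.__getitem__)
print("most-cited paper's quality:", round(quality[top], 2))
print("max citations:", max(citations),
      "median:", sorted(citations)[len(citations) // 2])
```

With a high `copy_prob` the citation counts come out heavy-tailed, and the most-cited paper, while biased toward being good, need not be the best one, which is the "marginal cases" point above.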
I am reminded of a paper by Simkin and Roychowdhury in which they argued, on the basis of an analysis of misprints in scientific paper citations, that most scientists don't actually read the papers they cite, but instead just copy the citations from other papers. From this they argue that the fact that some papers are widely cited in the literature can be explained by random chance alone.
Their evidence is not without flaws: the scientists might simply have copied the citations for convenience, despite having actually read the papers. Still, we can easily imagine a similar effect arising if the scientists do read the papers they cite, but use the citation lists in other papers to direct their own reading. In that case, a paper that is read and cited once is more likely to be read and cited again, so a small number of papers acquire an unusual prominence independent of their inherent worth.
If we see a significant number of instances where the conclusions of a widely-accepted paper are later debunked by a simple test, then we might begin to suspect that something like this is happening.
Hi!
I've been registered for a few months now, but only rarely have I commented.
Perhaps I'm overly averse to loss of karma? "If you've never been downvoted, you're not commenting enough."