Today, I talked with somebody about reading speed. I asked him how fast he can read. He didn't answer; instead, he said that the concept is abused by people. It's more complicated, he said, than claiming you read at a certain speed: it depends on whether you're reading a novel, a history textbook, or a poem.
I feel like he was falling into a kind of fallacy. He observed that a concept isn't entirely coherent and rejected the concept outright. However, the concept of reading speed seems real. It seems to capture something about reality.
This becomes obvious once you think about an experiment where we have two people read the same material and time them. I read "The adventures of the Lightcone team" (or whatever it is called) together with Chu. We made a bet that I could get more than halfway through the book before she finished. I bet $5 on that. When she was finished, I had almost managed to get halfway through the book. I was trying to read really fast, at the edge of comprehension.
Clearly there are latent causes in each of our brains that determine how fast we can read while still comprehending the text.
Trying to operationalize the concept that you're talking about, and imagining what sorts of experiments you would try to run to measure it, might be a good general way to avoid the fallacy of dropping a concept and losing its true kernel. Often you don't even need to run the experiment. Imagining it is sufficient.
Edit: see also this follow-up comment.
The problem with the concept of reading speed is that focusing on it as a scalar metric led to a speed-reading literature that recommends a bunch of habits that seem to reduce text comprehension. Optimizing for reading as fast as possible, while relying only on a subjective sense of comprehension, doesn't seem useful.
One corollary of GPT-based language models is that it should now be easier to create an app that actually trains reading comprehension by auto-generating questions about a text after you read it.
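A minimal sketch of what such a trainer loop could look like. The `generate_questions` function here is a stand-in: a real app would prompt an LLM API with the passage, while this version returns template questions so the sketch runs standalone.

```python
def generate_questions(text: str, n: int = 3) -> list[str]:
    """Generate n comprehension questions for a passage.

    Placeholder: a real implementation would send the passage to an LLM,
    e.g. with a prompt like "Write {n} comprehension questions about the
    following text: {text}". Here we produce template questions instead.
    """
    preview = " ".join(text.split()[:5])
    return [
        f"Q{i + 1}: What does the passage beginning '{preview}...' claim?"
        for i in range(n)
    ]


def quiz(text: str) -> None:
    """Show comprehension questions after the user finishes a passage."""
    for question in generate_questions(text):
        print(question)
        # In a real app: collect the reader's answer and score it,
        # tracking comprehension alongside measured reading speed.
```

The point of the design is that comprehension gets measured by answering questions rather than by the subjective feeling of having understood, which is exactly what the speed-reading habits above fail to check.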