The main problem with crawlers is that their usage patterns don't match those of regular users. Most optimization effort targets real-user behavior, so bots sometimes wind up using the site in ways that consume orders of magnitude more compute per request than a regular user would.
And Twitter has recently destroyed its API, I think? Which perhaps has the effect of de-optimizing the usage patterns of bots.
It's a nice analogy, but it all rests on whether infinite evidence is a thing or not, and there are no arguments one way or the other here. (Sure, infinite evidence would mean "whatever log odds you come up with, this is even stronger", but that doesn't rule out that it is a thing.)
Like, how much evidence for the hypothesis "I'll perceive the die to come up a 4" does the event "OK, the die was thrown and I am perceiving it to be a 3" provide? Or how much evidence do I have that I am conscious right now, when it feels like something to be me? I think any answer other than infinity is just playing a word game.
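To make the appeal to infinity concrete (framing evidence as a log likelihood ratio, which is my gloss rather than anything stated above):

$$\text{evidence for } H \text{ from } E \;=\; \log\frac{P(E\mid H)}{P(E\mid \lnot H)}.$$

If "I am perceiving a 3" is strictly impossible under "I'll perceive the die come up a 4", then $P(E\mid H)=0$ and the update is $-\infty$; likewise, if "it feels like something" has probability zero conditional on not being conscious, the update toward being conscious is $+\infty$. Any finite answer requires putting nonzero probability on the impossible side of the ratio.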
Aiming for convergence on truth. I guess it's true this might lead to a failure mode where one seeks convergence more than anything else. But taken literally, this should not discourage exploring wild new hypotheses. If you are both equally wrong, then by growing your uncertainty you get nearer to converging on truth.
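A minimal worked case, assuming accuracy is measured by expected log score (my framing, not stated in the comment): suppose neither of you can tell whether $H$ holds, so its probability given your shared evidence is $1/2$. Reporting credence $p$ then has expected log score

$$S(p) = \tfrac{1}{2}\log p + \tfrac{1}{2}\log(1-p),$$

which is maximized at $p = 1/2$. If one of you sits at $p = 0.9$ and the other at $p = 0.1$, both widening toward $0.5$ raises both expected scores while also closing the gap between you, so here growing your uncertainty and converging on truth point the same way.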
I hate that I actually liked Answer to Job