All of almath123's Comments + Replies

Perplexity depends on the vocabulary size and is sensitive to text preprocessing, both of which could skew the results presented here. This is a common problem. See the following reference:

Jihyeon Roh, Sang-Hoon Oh, and Soo-Young Lee (2020). "Unigram-Normalized Perplexity as a Language Model Performance Measure with Different Vocabulary Sizes."
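To make the comment's point concrete, here is a toy sketch (not from the original thread, and using made-up vocabulary sizes): a uniform model over a vocabulary V assigns every token probability 1/|V|, so its perplexity is exactly |V|. Scoring the same text under a word-level and a character-level tokenization therefore yields very different perplexities even though nothing about the "model" has changed.

```python
import math

def perplexity(log_probs):
    """Perplexity is exp of the average negative log-probability per token."""
    return math.exp(-sum(log_probs) / len(log_probs))

# The same text, tokenized two ways (toy illustration).
word_tokens = ["the", "cat", "sat"]   # 3 word-level tokens
char_tokens = list("the cat sat")     # 11 character-level tokens

word_vocab = 10_000  # assumed word vocabulary size
char_vocab = 27      # a-z plus space

# A uniform model assigns 1/|V| to every token, so its perplexity is
# exactly |V|: the number is driven by vocabulary size, not model quality.
ppl_words = perplexity([math.log(1 / word_vocab)] * len(word_tokens))
ppl_chars = perplexity([math.log(1 / char_vocab)] * len(char_tokens))

print(ppl_words)  # ~10000.0
print(ppl_chars)  # ~27.0 -- same text, very different perplexity
```

Normalizing per character (or per byte), or applying the unigram normalization proposed in the cited paper, is the usual way to make such numbers comparable across vocabularies.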

Matthew Barnett
Thanks! That's really interesting. I'll check it out.