You mention "I would point out that your calculations are based on the incident data our senses pick up, whereas what we learn is based on the information received by our brain. Almost all of the incident data is thrown away much closer to the source."
Wouldn't this be similar to how a neural network effectively "disregards" training data it has already fit? If it has already learned that pattern, the loss is already low, so there's (almost) no gradient and the loss wouldn't go down further. Maybe there's a mechanism missing from current neural nets' online training that would increase efficiency by recognizing redundant data and skipping the forward pass entirely. Tesla does this in an engineered way: they throw away most data at the source and only train on "surprise/interventions", i.e. data that generates a gradient.
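A toy sketch of the idea (my own construction, not Tesla's actual pipeline): fit a trivial online model, but gate each training update on a loss threshold, so "already learned" examples are skipped because they would produce a near-zero gradient anyway.

```python
import random

# Toy loss-gated online training: fit y = w*x one sample at a time,
# but only run an update when the example is "surprising", i.e. its
# loss exceeds a threshold. Redundant data yields (near-)zero gradient,
# so skipping it costs almost nothing.
random.seed(0)
w, lr, threshold = 0.0, 0.01, 0.01
updates = skipped = 0
for _ in range(5000):
    x = random.uniform(-1.0, 1.0)
    y = 3.0 * x                       # true underlying relationship
    loss = (w * x - y) ** 2           # cheap "surprise" check
    if loss < threshold:
        skipped += 1                  # redundant sample: skip the update
        continue
    w -= lr * 2 * (w * x - y) * x     # gradient step on surprising data only
    updates += 1
print(f"w ~ {w:.2f}, updates={updates}, skipped={skipped}")
```

In this toy run the vast majority of samples end up skipped once `w` gets close to the true value, which is the efficiency claim in miniature; the threshold value and the linear model are arbitrary choices for illustration.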
I don't really get what you mean by "Not sure how much other preprocessing and discarding of data happens elsewhere, but it doesn't take that many more steps to close the remaining 1.5 OOMs gap." Are you saying the real numbers are about 1.5 orders of magnitude below what I calculated (a factor of ~30), or 1.5% of what I calculated?
I did some calculations with a bunch of assumptions and simplifications, but here's a high-estimate, back-of-the-envelope calculation of the data and "tokens" a 30-year-old human would have "trained" on:
I'm curious where you got the claim that "models trained mostly on English text are still pretty good at Spanish". Do you have a reference?
I'm very much aligned with the version of utilitarianism that Bostrom and Ord generally put forth, but a question came up in a conversation about this philosophy and its view of sustainability. As a thought experiment: what would be consistent with this philosophy if we discovered that a very clear way to minimize existential risk from X required the genocide of half, or some other significant subset, of the population?
Here we are now: how would you assess the progress of C. elegans emulation in general, and of your particular approach?
Ok, let's examine a more conservative scenario using solely visual input. If we take 10 megabits/s as the base and deduct 30% to account for sleep time, we end up with roughly 0.78 petabytes accumulated over 30 years. That translates to approximately 157 trillion tokens over 30 years, or around 5.24 trillion tokens annually. Interestingly, even under these conservative conditions, the estimate still exceeds the training data of LLMs (~1 trillion tokens) by two orders of magnitude.
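The arithmetic above can be sketched as follows. Note the ~5 bytes-per-token conversion is my assumption, chosen to match the quoted token count; with decimal units the byte total comes out slightly higher (~0.83 PB) than the quoted 0.78 PB, so treat all figures as ballpark.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600   # ~3.15e7 seconds
YEARS = 30
AWAKE_FRACTION = 0.7                 # deduct ~30% for sleep
VISUAL_BITRATE = 10e6                # 10 megabits/s of visual input
BYTES_PER_TOKEN = 5                  # assumed conversion factor

bits = VISUAL_BITRATE * AWAKE_FRACTION * YEARS * SECONDS_PER_YEAR
petabytes = bits / 8 / 1e15
tokens = bits / 8 / BYTES_PER_TOKEN
print(f"{petabytes:.2f} PB total, {tokens/1e12:.0f}T tokens, "
      f"{tokens/1e12/YEARS:.1f}T tokens/year")
```

Either way the conclusion is unchanged: the total lands around 10^14 tokens, roughly two orders of magnitude above a ~1 trillion token LLM training run.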