If Deep Learning people suddenly start working hard on models with dynamic architectures that self-modify (i.e. a network outputs its own weight and architecture updates for the next time-step) and they *don't* see large improvements in task performance, I would take that as evidence against AGI going FOOM.
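To make the idea concrete, here is a minimal, purely illustrative sketch (not any published model; all names and sizes are made up) of a network whose forward pass at each time-step produces, alongside its output, an update that is applied to its own weights before the next step:

```python
import numpy as np

# Hypothetical sketch of a "self-modifying" net: each step it emits a delta
# that is added to its own weight matrix for the next time-step.

rng = np.random.default_rng(0)

d = 4                                        # state dimensionality (arbitrary)
W = rng.normal(scale=0.1, size=(d, d))       # weights the network modifies online
U = rng.normal(scale=0.1, size=(d, d * d))   # maps hidden state -> flattened weight update

def step(x, W):
    """One time-step: ordinary forward pass plus a self-produced weight update."""
    h = np.tanh(W @ x)                          # output for this step
    delta_W = 0.01 * (U @ h).reshape(d, d)      # network's own update to its weights
    return h, W + delta_W

x = rng.normal(size=d)
for t in range(5):
    x, W = step(x, W)
    print(f"t={t}  ||W||_F = {np.linalg.norm(W):.4f}")
```

This only covers the weight-update half of the claim; "architecture update" would mean the network also emits changes to its own connectivity or layer structure, which is harder to express in a few lines.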