Previous Work on Recreating Neural Network Input from Intermediate Layer Activations
Recently I've been experimenting with reconstructing a neural network's input from intermediate-layer activations. The possibility has implications for interpretability: for example, if certain neurons activate on a certain kind of input, you know those neurons are 'about' that kind of input. My question is: does anyone know of prior work/research...
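For concreteness, here's a rough sketch of the kind of setup I have in mind: gradient-based activation matching, where you optimize a fresh input until its intermediate activations match those recorded from a real input. The specific model (a torchvision ResNet-18), the choice of layer, the input size, and the optimizer settings below are all placeholders, not the exact thing I'm running.

```python
import torch
import torch.nn.functional as F
from torchvision import models  # assumption: torchvision is available; any conv net would do

# Placeholder model and layer -- substitute whichever network/layer you're probing.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)  # we only optimize the input, not the weights
target_layer = model.layer3

# Capture the intermediate activations with a forward hook.
captured = {}
def hook(module, inputs, output):
    captured["act"] = output
target_layer.register_forward_hook(hook)

# Record the target activations for a real input (random stand-in image here).
x_real = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    model(x_real)
target_act = captured["act"].clone()

# Optimize a fresh input so its activations at that layer match the target.
x_recon = torch.zeros(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([x_recon], lr=0.05)
for step in range(500):
    optimizer.zero_grad()
    model(x_recon)
    loss = F.mse_loss(captured["act"], target_act)
    loss.backward()
    optimizer.step()

# x_recon now approximates whatever information about the input the layer preserved.
```

In practice the reconstructions tend to look much better with some image prior added to the loss (e.g. a smoothness / total-variation penalty), since otherwise the optimizer is happy to match the activations with noise-like inputs.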
It's not a take that I've thought about deeply, but could the evidence be explained by a technological advancement: the ability to hop between diverging universes?
It would explain why we don't see aliens: they discover the technology and find that empty parallel worlds are closer in terms of energy expenditure.
It could also explain why the interlopers don't bother us much: they are scouting for uninhabited parallel Earths with easily accessible resources and skipping those with a population. The only ones we see are the ones incompetent or unlucky enough to crash.
It would explain why aliens aren't ridiculously outclassing us technologically. They don't have to solve interstellar travel before they start hopping.
It would provide an