Zechen Zhang

This is an interesting point. When we did our causality studies across layers, we also found that it is mostly the board-state features in the middle layers, not the deep layers, that are used causally. However, probe accuracy does increase with depth.
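
For concreteness, here is a minimal sketch (not our actual code) of the two measurements being compared: a per-layer linear probe for board state, and a causal intervention that edits the residual stream along the probe's direction. The board encoding, `D_MODEL`, and how `resid` is collected are all assumptions for illustration.

```python
import torch
import torch.nn as nn

N_SQUARES = 64   # Othello board squares
N_STATES = 3     # empty / mine / yours per square (assumed encoding)
D_MODEL = 512    # residual stream width (assumed)

class BoardProbe(nn.Module):
    """Linear probe: residual activation -> per-square board-state logits."""
    def __init__(self, d_model=D_MODEL):
        super().__init__()
        self.lin = nn.Linear(d_model, N_SQUARES * N_STATES)

    def forward(self, resid):  # resid: [batch, d_model]
        return self.lin(resid).view(-1, N_SQUARES, N_STATES)

def train_probe(probe, resid, boards, epochs=5, lr=1e-3):
    # boards: [batch, 64] integer labels for each square
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        loss = loss_fn(probe(resid).flatten(0, 1), boards.flatten())
        opt.zero_grad(); loss.backward(); opt.step()
    return probe

def probe_accuracy(probe, resid, boards):
    preds = probe(resid).argmax(-1)          # [batch, 64]
    return (preds == boards).float().mean().item()

def intervene(resid, probe, square, new_state, alpha=1.0):
    """Causal edit: push the residual along the probe direction so the probe
    reads `new_state` for one square; the edited activation is then fed
    through the rest of the model (not shown) to test downstream effects."""
    w = probe.lin.weight.view(N_SQUARES, N_STATES, -1)  # [64, 3, d_model]
    direction = w[square, new_state] - w[square].mean(0)
    return resid + alpha * direction / direction.norm()
```

Running `probe_accuracy` per layer gives the accuracy-vs-depth curve, while `intervene` is the kind of edit whose downstream effect on move predictions distinguishes "causally used" middle-layer features from merely decodable deep-layer ones.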

I don't know how this squares with the fact that SAEs also find more of these features in the middle layers. The "natural" features that the SAEs find in the last few layers need not contain much information about the board state; they may carry only the partial information needed to make the move decision.
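
One way to test that hypothesis (a hedged sketch, not something from the original discussion): train the same kind of board-state probe on SAE latents layer by layer and compare against probes on the raw residual stream. If late-layer SAE features are only decision-relevant summaries, latent-probe accuracy should fall off in the last layers even while residual-probe accuracy stays high. `sae_encode` is an assumed interface, not a real SAE library API.

```python
import torch
import torch.nn as nn

def latent_probe_accuracy(sae_encode, resid_acts, boards,
                          n_squares=64, n_states=3, epochs=5):
    """Train a linear probe (SAE latents -> board state) and report accuracy.
    `sae_encode`: assumed callable mapping residual activations
    [batch, d_model] to latents [batch, n_latents]."""
    latents = sae_encode(resid_acts).detach()
    probe = nn.Linear(latents.shape[-1], n_squares * n_states)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        logits = probe(latents).view(-1, n_squares, n_states)
        loss = loss_fn(logits.flatten(0, 1), boards.flatten())
        opt.zero_grad(); loss.backward(); opt.step()
    preds = probe(latents).view(-1, n_squares, n_states).argmax(-1)
    return (preds == boards).float().mean().item()
```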

Continual learning, I would argue, is one alternative for achieving long-term planning and agency rather than a necessary ingredient. LLMs augmented with long-term memory retrieval can do long-term planning, assuming the base model is already powerful enough. Also, agency can emerge naturally from the simulator.
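
To illustrate the "augmented LLM" point, here is a toy sketch of a frozen model doing multi-step planning against an external vector memory, with no weight updates. Everything here is a hypothetical stand-in: `llm` is any text-completion callable and `embed` any embedding function; no real API is implied.

```python
import numpy as np

class VectorMemory:
    """Minimal long-term memory: store notes, retrieve by cosine similarity."""
    def __init__(self):
        self.texts, self.vecs = [], []

    def add(self, text, embed):
        self.texts.append(text)
        self.vecs.append(embed(text))

    def retrieve(self, query, embed, k=3):
        if not self.vecs:
            return []
        q = embed(query)
        sims = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
                for v in self.vecs]
        top = np.argsort(sims)[-k:][::-1]
        return [self.texts[i] for i in top]

def plan_step(llm, embed, memory, goal, step_idx):
    """One planning step: retrieve relevant past notes, ask the frozen model
    for the next step, and persist its output for future retrieval."""
    context = "\n".join(memory.retrieve(goal, embed))
    prompt = (f"Goal: {goal}\nRelevant past notes:\n{context}\n"
              f"Propose step {step_idx} and note anything worth remembering.")
    out = llm(prompt)        # hypothetical call to a frozen model
    memory.add(out, embed)   # long-term state lives outside the weights
    return out
```

The point of the sketch is that the long-horizon state lives in the memory store, not in the weights, so no continual learning is required.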

I'm not convinced that continual learning is even the most likely path to AGI.