Old is new again. Figure 1(a) is cascade control (the higher level operates by means of the lower) and 1(b) is the subsumption architecture (the higher level operates instead of the lower).
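The distinction can be sketched in a few lines of code. This is a hypothetical toy, not anything from the paper: the controller gains, function names, and signals are all made up for illustration. The point is only where the higher layer's output goes — into the lower layer's setpoint (cascade) versus directly to the actuator, bypassing the lower layer (subsumption).

```python
def low_level(setpoint, measurement):
    """Toy proportional controller standing in for the lower layer."""
    return 0.5 * (setpoint - measurement)

def cascade(goal, outer_measurement, inner_measurement):
    # Cascade control: the higher layer's output becomes the lower
    # layer's setpoint, so it acts *by means of* the lower layer.
    inner_setpoint = 0.2 * (goal - outer_measurement)
    return low_level(inner_setpoint, inner_measurement)

def subsumption(goal, measurement, higher_layer_active):
    # Subsumption: when active, the higher layer suppresses the lower
    # one and commands the actuator directly -- *instead of* it.
    if higher_layer_active:
        return 0.8 * (goal - measurement)
    return low_level(goal, measurement)
```

In the cascade case the lower loop is always in the signal path; in the subsumption case it is switched out entirely whenever the higher layer fires.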
I don't know why they call 1(b) "open loop". The loop looks closed to me.
I wonder if the authors have considered using arbitrarily many layers of control.
Did anything about this paper stand out to you? It doesn't strike me as anything revolutionary on its own. Interesting component, perhaps. Does it change your expectations about what safety approaches work? Is it mainly capabilities news?
It certainly is an interesting component of a research tree that will be key to making anything seriously scale, though.
No, just a piece of the puzzle. I want to outline a fuller understanding of AI self-control, one that integrates ML, cognitive science, theory of consciousness, control theory/resilience theory, and dynamical systems theory/stability theory.
Only this sort of understanding could make the discussion of oracle AI vs. agent AI agendas really substantiated, IMO.
Makes sense. Are you familiar with Structured State Spaces and followups?
The preprint was published by Devdhar Patel, Joshua Russell, Francesca Walsh, Tauhidur Rahman, Terrence Sejnowski, and Hava Siegelmann in December 2022.
Abstract:
Conclusion: