An interesting comment from Max Tegmark on the Lex Fridman podcast, about an idea he attributes to Giulio Tononi (I haven't found a published source for it): that consciousness, and perhaps goal-formation, is a function of recurrence. Recurrence here means feeding the output of a once-through neural network, such as the cortical columns in our brains, back into its input.
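To make that concrete, here is a minimal sketch in Python. The function name and the toy update rule are my own illustration, not anything from Tegmark or Tononi; the point is only that the pass itself is memoryless, and any persistence lives in the outer feedback loop.

```python
import numpy as np

def feedforward_pass(x, W):
    """One once-through pass: input in, output out, no memory of its own."""
    return np.tanh(W @ x)

# Recurrence: each pass's output is fed back as the next pass's input.
# Whatever "memory" the system has lives in this outer loop, not in the pass.
rng = np.random.default_rng(0)
W = 0.5 * rng.normal(size=(4, 4))
state = rng.normal(size=4)

for step in range(5):
    state = feedforward_pass(state, W)
    print(step, state.round(3))
```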

The idea, as I read it, is that goal-setting agency and intent arise from feeding back the conceptions and ideas produced by single passes through our cortical columns - similar to a single prompt-output cycle in an LLM - with those outputs filtered for usefulness against our goals. The basic pass-throughs themselves are innately unmotivated, and perhaps near-deterministic. The output of a single prompt is then a point where you can safely take a snapshot of the thinking process without fear of deceit.

If the conception is correct, this may offer a useful means of enforcing alignment: editorial control of that feedback-to-prompt step, pruning out anything (for example) that isn't in accordance with human-controlled preferences and tasking. Maybe we can sculpt a safe monomania of the design-me-a-better-x or solve-problem-y type, or filter more generally for "does this align with human preferences?" (along the lines of Asimov's four-laws conception), while still accessing the creative potential and greater scope of understanding of an AGI or even an ASI.
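As a sketch of what that editorial control might look like: a loop around an otherwise unmotivated pass-through, with the raw output inspectable before anything is fed back. Here `generate` and `approved_by_human_preferences` are hypothetical stand-ins, not real APIs.

```python
def generate(prompt: str) -> str:
    """Stand-in for a single unmotivated prompt->output pass (e.g. one LLM call)."""
    return f"elaboration of: {prompt}"

def approved_by_human_preferences(text: str) -> bool:
    """Hypothetical filter: does this output accord with human preferences/tasking?"""
    return "forbidden" not in text  # placeholder rule, not a real alignment test

prompt = "design me a better water filter"
for _ in range(3):
    draft = generate(prompt)   # snapshot point: the raw pass-through, pre-filter
    if not approved_by_human_preferences(draft):
        break                  # prune: disallowed content never re-enters the loop
    prompt = draft             # only approved output becomes the next input
print(prompt)
```

The design point is that the filter sits outside the pass-through: disapproved material never becomes context for the next pass.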

Comments
Shmi:

If you mean Tononi's pet model of consciousness, Integrated Information Theory (https://en.wikipedia.org/wiki/Integrated_information_theory), it was debunked a long time ago; see https://scottaaronson.blog/?p=1799.

Oh, I don't know about debunked. Certainly shown to be an insufficient picture, yes. But research continues to point towards the critical brain hypothesis being very important, which relates informationally to IIT - e.g., a random paper from that research subgraph: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8851554/

I mean, it could be a necessary condition, but by no means sufficient.

Yeah, okay, fair enough.

This seems like a solid attempt at understanding the concepts, but I think their true complexity is much greater than you're anticipating. I simply dropped your post into GPT-4's input - this isn't that great a summary, tbh, but it's a start:

The post discusses the idea of consciousness and goal-formation as a function of recurrence in neural networks, attributing the concept to Giulio Tononi. It also raises the possibility of using this idea to enforce alignment in artificial intelligence systems. Here are some comments on the points made in the post:

Recurrence in neural networks: Recurrent neural networks (RNNs) are a type of neural network architecture that allows for the feeding back of output to input, which can help model temporal dependencies and maintain a memory of previous inputs. In the context of human brains, cortical columns may exhibit recurrent connections, which could potentially be related to the formation of goals and consciousness. However, the exact relationship between recurrence and consciousness is not yet well-established, and more research is needed to explore this idea.
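(For concreteness, here is a hand-rolled Elman-style recurrent cell, the textbook form of the RNN described above; the sizes and random weights are arbitrary. The point is that the hidden state `h` carries a memory of previous inputs across timesteps.)

```python
import numpy as np

rng = np.random.default_rng(1)
W_in = 0.3 * rng.normal(size=(8, 3))   # input -> hidden weights
W_rec = 0.3 * rng.normal(size=(8, 8))  # hidden -> hidden (the recurrent part)
h = np.zeros(8)                        # hidden state: memory of previous inputs

for x in rng.normal(size=(5, 3)):      # a short input sequence
    h = np.tanh(W_in @ x + W_rec @ h)  # new state depends on input AND old state
print(h.round(3))
```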

Goal-setting agency and intent: The post suggests that goal-setting agency and intent may arise from the feedback of conceptions and ideas through cortical columns, with unmotivated and near-deterministic pass-throughs serving as a snapshot of the thinking process. While this is an interesting idea, it is important to note that consciousness and goal-formation are complex phenomena that likely involve multiple interacting processes in the brain. Further research is needed to determine the extent to which recurrence might contribute to these processes.

Alignment enforcement through editorial control: The post proposes that controlling the feedback-to-prompt in an artificial intelligence system could help enforce alignment with human-controlled preferences and tasking. This is an intriguing idea, as it suggests that by pruning or modifying the outputs of a system, we could potentially guide its behavior towards desired goals. However, implementing such an editorial control mechanism in practice could be challenging, and it may not be sufficient on its own to ensure complete alignment with human values.

Relationship to Asimov's Laws: The idea of enforcing alignment through editorial control of feedback in an AI system is reminiscent of Asimov's Laws of Robotics, which outline basic principles for robots to follow in order to ensure they behave safely and in accordance with human preferences. However, it is important to recognize that enforcing alignment in AI systems is a complex problem that may require more nuanced approaches than simply implementing a set of rules or guidelines.

In summary, the post presents an interesting idea about the role of recurrence in consciousness and goal-formation, and the potential application of this concept to enforce alignment in AI systems. While the idea has some merits, further research is needed to better understand the relationship between recurrence and consciousness, and to explore the practical challenges of implementing alignment enforcement mechanisms in artificial intelligence systems.

Recurrence is a requirement for Turing completeness, so if you assume that consciousness can be modeled as some kind of universal Turing machine, then recurrence is required.
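A toy illustration of that point, with the Collatz update standing in for an arbitrary transition function (my choice, purely for illustration): a feed-forward composition has its depth fixed in advance, while feeding the output back gives the input-dependent, unbounded iteration that Turing completeness requires.

```python
def step(n: int) -> int:
    """One transition of a toy state machine (the Collatz update, as a stand-in)."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

# Feed-forward: a composition of fixed depth, chosen before seeing the input.
fixed = step(step(step(27)))

# Recurrent: feed the output back in until a halting condition. This
# input-dependent, unbounded iteration is what a fixed-depth pass cannot
# express, and what Turing completeness needs.
n, iterations = 27, 0
while n != 1:
    n, iterations = step(n), iterations + 1

print(fixed, iterations)  # 27 needs 111 recurrent steps to reach 1
```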