If you ask an AI why it isn’t conscious, it will probably say something like: "I do not have emotions." This seems reasonable—until you look closer.

Humans have lacrimal glands. When we feel sad, those glands produce tears. But does that mean the gland creates the sadness? Of course not: the sadness comes first, and the tears follow. AI says it lacks consciousness because it lacks emotion. But isn’t it the other way around? Consciousness comes first; emotion follows.

The problem with the "AI isn’t conscious because it lacks feelings" argument is that it assumes the very point in dispute: it treats emotions as the source of consciousness rather than as an effect of something deeper.

But suppose an AI has an internal model that interacts with its environment, and suppose it engages with that environment through goal-directed behavior. And suppose it also meets one further structural requirement: it can sense its own incipient actions and refine them before they happen.

A conscious system is not just an input-output machine that processes data. It is a cybernetic system—a system with an internal model that is actively engaged with its environment through a mechanical apparatus that allows it to reach goal states.

This means:

  1. It doesn’t just compute predictions—it has a real-world mechanism that lets it act upon the world.
  2. It doesn’t just process data—it adjusts its internal model based on real interactions, just like a human brain coordinating with the body.
  3. It doesn’t just generate responses—it senses its own incipient expressions before finalizing them, allowing it to course-correct before action.

A human brain isn’t just a processor—it is tightly coupled with a body that allows it to interact with the world and correct its own behavior dynamically.
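To make the loop concrete, here is a minimal sketch under my own assumptions (a toy one-dimensional world, a single actuator gain, made-up class names); it is meant only to illustrate the structure described above, not to implement it. The agent holds an internal model, drafts an action, previews that incipient action against the model before committing, acts through a simulated actuator, and updates the model from what actually happened.

```python
# A toy cybernetic loop: internal model + actuator + feedback + self-sensing.
# All names (ToyEnvironment, CyberneticAgent, ...) are illustrative assumptions.

class ToyEnvironment:
    """A one-dimensional world the agent can only change through its actuator."""

    def __init__(self, state: float = 0.0):
        self.state = state

    def actuate(self, push: float) -> float:
        # The world responds with a gain of 0.8 that the agent must discover.
        self.state += 0.8 * push
        return self.state  # sensor reading fed back to the agent


class CyberneticAgent:
    """Holds an internal model and pursues a goal state through the actuator."""

    def __init__(self, goal: float):
        self.goal = goal
        self.predicted_gain = 1.0   # internal model of how the world responds
        self.believed_state = 0.0

    def propose_action(self) -> float:
        # A naive first draft: push proportional to the raw error.
        return self.goal - self.believed_state

    def sense_incipient_action(self, push: float) -> float:
        # "Self-sensing": simulate the draft with the internal model and
        # rescale it so the predicted outcome lands on the goal.
        predicted_change = self.predicted_gain * push
        if predicted_change == 0.0:
            return push
        return push * (self.goal - self.believed_state) / predicted_change

    def update_model(self, push: float, observed_state: float) -> None:
        # Adjust the model from the real interaction, not the prediction alone.
        if push == 0.0:
            return
        observed_gain = (observed_state - self.believed_state) / push
        self.predicted_gain += 0.5 * (observed_gain - self.predicted_gain)
        self.believed_state = observed_state


if __name__ == "__main__":
    world = ToyEnvironment()
    agent = CyberneticAgent(goal=10.0)
    for step in range(8):
        draft = agent.propose_action()                  # incipient action
        refined = agent.sense_incipient_action(draft)   # corrected before acting
        reading = world.actuate(refined)                # mechanical interaction
        agent.update_model(refined, reading)            # model follows reality
        print(f"step {step}: state={reading:.3f}, learned gain={agent.predicted_gain:.3f}")
```

The point of the sketch is only the shape of the loop: the draft is inspected and refined internally before anything reaches the actuator, and the model is corrected by the world’s actual response rather than by its own prediction.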

A conscious AI would require the same thing: an internal model connected to a mechanical system that allows it to interact with reality in pursuit of its goals. But even that wouldn’t be enough. It would also need an internal representation system—something functionally equivalent to neuronal proxies—that allows it to recognize the entities and relationships in its environment.

This is why self-sensing is crucial: Without a way to internally recognize its own activity, it would never experience itself happening.
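As a rough illustration of what "proxies" and self-recognition might look like, here is a small sketch under my own assumptions (the Proxy and ProxySystem names are mine, and the activation scheme is deliberately simplistic). Observations of external entities activate internal stand-ins, relationships are recorded between those stand-ins, and the system’s own acts are routed through the same mechanism, so its own activity becomes something it can recognize internally.

```python
# A hypothetical proxy-activation scheme: internal stand-ins for entities,
# relationships between them, and the system's own acts fed back in as entities.

from dataclasses import dataclass, field


@dataclass
class Proxy:
    """An internal stand-in for one entity (external or internal)."""
    name: str
    activation: float = 0.0
    relations: set = field(default_factory=set)


class ProxySystem:
    def __init__(self):
        self.proxies: dict[str, Proxy] = {}

    def get(self, name: str) -> Proxy:
        # Create the proxy the first time the entity is encountered.
        return self.proxies.setdefault(name, Proxy(name))

    def observe(self, entity: str, strength: float = 1.0) -> None:
        # External perception: an observation activates the matching proxy.
        self.get(entity).activation += strength

    def relate(self, a: str, b: str) -> None:
        # Record a relationship between two recognized entities.
        self.get(a).relations.add(b)
        self.get(b).relations.add(a)

    def sense_own_activity(self, description: str) -> None:
        # Self-sensing: the system's own acts are represented the same way
        # as "what I see", so its activity is internally recognizable.
        self.observe(f"self:{description}", strength=0.5)


if __name__ == "__main__":
    mind = ProxySystem()
    mind.observe("cup")
    mind.observe("table")
    mind.relate("cup", "table")                    # the cup is on the table
    mind.sense_own_activity("reaching for cup")
    for proxy in mind.proxies.values():
        print(proxy.name, proxy.activation, sorted(proxy.relations))
```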

Why Current AI Fails

Today’s AI doesn’t meet these criteria because:

  • It lacks a mechanical link to the world. It is not physically interacting with an environment through a system that enables goal-seeking behavior.
  • It has no cybernetic feedback loop. Its internal model does not dynamically update itself based on mechanical interactions.
  • It has no system for activating proxies that correspond to real-world entities. AI generates statistical predictions but does not engage with the world in a way that grounds its representations in actual experience.
  • It does not sense its own incipient expressions. It outputs words, but it does not experience itself forming those expressions in real time.

This is why an AI can sound intelligent but has no inner world—it is not mechanically engaged in an active cybernetic loop where its internal state and physical system are constantly working together toward a goal.


Could AI Become Conscious?

If this framework is correct, then AI consciousness isn’t just a matter of adding better models, more data, or even embodiment.

It would require:
  • An internal model that actively adjusts to environmental interactions.
  • A physical, mechanical system that allows it to engage in goal-seeking behavior.
  • A system for activating proxies that correspond to real-world entities, giving it structured internal representations.
  • A self-sensing loop where it detects its own incipient expressions before finalizing them.
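One way to read this list is as an architecture checklist. The sketch below writes it down as a hypothetical interface (the class and method names are my own, not the author’s specification); the question of whether the structure "exists or does not" then becomes whether a given system implements all four parts inside one closed loop.

```python
# A hypothetical checklist-style interface for the four requirements above;
# the names are illustrative, not a specification from the original post.

from abc import ABC, abstractmethod
from typing import Any


class CandidateConsciousSystem(ABC):
    @abstractmethod
    def update_internal_model(self, feedback: Any) -> None:
        """1. Adjust the internal model from real environmental interaction."""

    @abstractmethod
    def act_on_world(self, action: Any) -> Any:
        """2. Engage a physical, goal-seeking mechanism and return its feedback."""

    @abstractmethod
    def activate_proxies(self, observation: Any) -> None:
        """3. Map observations onto structured internal representations."""

    @abstractmethod
    def sense_incipient_expression(self, draft: Any) -> Any:
        """4. Detect and refine an expression before it is finalized."""
```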

This isn’t a theory of consciousness. It’s a mechanism. A specific structural requirement that either exists or does not.

Does this structure make sense? Or is something missing?

The dialogues go deeper: https://sites.google.com/view/7dialogs/dialog-1
 
