This post was rejected for the following reason(s):

  • Not obviously not Language Model. Sometimes we get posts or comments where it's not clear they're human-generated. 

    LLM content is generally not good enough for LessWrong, and in particular we don't want it from new users who haven't demonstrated a more general track record of good content.  See here for our current policy on LLM content. 

    If your post/comment was not generated by an LLM and you think the rejection was a mistake, message us on intercom to convince us you're a real person. We may or may not allow the particular content you were trying to post, depending on circumstances.

  • Difficult to evaluate, with potential yellow flags. We are sorry about this, but, unfortunately this content has some yellow-flags that historically have usually indicated kinda crackpot-esque material. It's totally plausible that actually this one is totally fine. Unfortunately, part of the trouble with separating valuable from confused speculative science or philosophy is that the ideas are quite complicated, accurately identifying whether they have flaws is very time intensive, and we don't have time to do that for every new user presenting a speculative theory or framing (which are usually wrong).

    Our solution for now is that we're rejecting this post, but you are welcome to submit posts or comments that are about different topics. If it seems like that goes well, we can re-evaluate the original post. But, we want to see that you're not just here to talk about this one thing (or a cluster of similar things).

Hi all,

This is my first post here, and I’d appreciate constructive critique from this community. I'm currently developing a theoretical framework in collaboration with ChatGPT that I’d like to submit for your consideration. It’s called Derivative Information Theory (DIT) — an attempt to model agency and consciousness not as static attributes, but as emergent properties of information dynamics over time. I don't have any formal training, and am still an undergrad, so I'd truly appreciate feedback from those with formal knowledge and experience in the field who can see if there's anything worth pursuing in these ideas.

The motivation is straightforward: while frameworks like Integrated Information Theory (IIT) and the Free Energy Principle (FEP) have made great strides in defining static measures of system integration or predictive efficiency, they don’t explicitly address how a system’s change in informational structure correlates with agency. DIT aims to fill that gap.

Core Idea:

DIT proposes that consciousness and agency scale with the rate of change of structured information over time. Formally:

Ψ(t) = dΦ/dt

Where:

  • Φ(t) is the system’s integrated information at time t (borrowing from IIT),
  • Ψ(t) is the proposed measure of dynamic agency or “conscious intensity.”

Intuitively, systems that not only have high integration (Φ) but are rapidly changing their internal structure and models (high dΦ/dt) exhibit higher degrees of agency. Consciousness, in this framing, isn’t a static property but a dynamic gradient.

This suggests, for example:

  • Brains in altered states (psychedelics, REM sleep, peak cognitive activity) might show spikes in Ψ(t).
  • Intelligent artificial systems could be trained to optimize Ψ(t) as an objective function.
  • Ψ(t) might correlate with subjective experiences of "flow" or rapid cognitive restructuring.
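Since Φ(t) would in practice be sampled at discrete time points, Ψ(t) can be estimated with a finite difference. A minimal sketch of that estimator, where the Φ values are made-up placeholder numbers rather than real IIT computations:

```python
# Estimate Psi(t) = dPhi/dt from a discretely sampled Phi time series.
# The Phi values below are illustrative placeholders, not real IIT output.

def estimate_psi(phi_series, dt):
    """Central-difference estimate of dPhi/dt at interior time points."""
    return [
        (phi_series[i + 1] - phi_series[i - 1]) / (2 * dt)
        for i in range(1, len(phi_series) - 1)
    ]

phi = [0.10, 0.12, 0.18, 0.30, 0.32, 0.33]  # sampled Phi(t), arbitrary units
psi = estimate_psi(phi, dt=1.0)
print(psi)  # Psi peaks where Phi is changing fastest
```

On this framing, the interesting quantity is not the level of the Φ series but where its slope spikes.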

High-level motivation:

DIT stems from the intuition that change, not stasis, is the mark of intelligence. A predictive system is valuable, but a system that updates itself efficiently in response to novelty and complexity exhibits agency in a more meaningful sense.

While IIT is powerful, it feels too static. By introducing the time derivative, we add temporal depth to the measure of integration.

Draft Model:

  1. Structured Information (I(t)) — The meaningful, non-random information in the system at time t.
  2. Integrated Information (Φ(t)) — The degree to which system elements causally constrain each other (IIT measure).
  3. Dynamic Agency Measure (Ψ(t)) — The rate of change of Φ(t).

Early hypothesis: Systems that maximize Ψ(t) exhibit qualities associated with consciousness and agency.

Applications (Speculative but testable)

  • AI development: Train models to maximize Ψ(t) rather than just reward prediction accuracy. Could this induce emergent agency-like behaviors?
  • Neuroscience & BCIs: Use neuroimaging to track Ψ(t) as a metric of cognitive engagement or recovery.
  • Psychopharmacology: Investigate correlation between Ψ(t) and altered states of consciousness under psychedelics.
  • Evolutionary biology: Model natural selection as a process favoring Ψ(t)-maximizing organisms, possibly offering a unifying explanatory principle.
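The AI-development bullet can be made slightly more concrete as a combined objective. In this hypothetical sketch, `phi_prev` and `phi_curr` stand in for some tractable approximation of Φ (exact IIT Φ is intractable for large systems), and the weight `lambda_psi` is an invented hyperparameter:

```python
# Toy combined objective: task loss minus a bonus proportional to Psi,
# the rate of change of an integration proxy. The proxy and the weight
# lambda_psi are assumptions for illustration, not an established method.

def combined_loss(task_loss, phi_prev, phi_curr, dt, lambda_psi=0.1):
    """Lower is better: task loss penalized, positive dPhi/dt rewarded."""
    psi = (phi_curr - phi_prev) / dt
    return task_loss - lambda_psi * psi

# Two training steps with identical task loss but different Phi trajectories:
static = combined_loss(task_loss=1.0, phi_prev=0.5, phi_curr=0.5, dt=1.0)
restructuring = combined_loss(task_loss=1.0, phi_prev=0.5, phi_curr=0.7, dt=1.0)
print(static, restructuring)  # the restructuring step scores a lower loss
```

A step that restructures its internal integration is preferred over an equally accurate static one, which is the behavior the bullet above asks about.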

Comparison to Existing Theories

  • Integrated Information Theory (IIT): static; focus on system integration (Φ). DIT adds dΦ/dt for temporal dynamism.
  • Free Energy Principle (FEP): dynamic; focus on error minimization. DIT casts Ψ(t) as adaptive restructuring.
  • Predictive Processing: dynamic; focus on the prediction hierarchy. DIT casts Ψ(t) as the acceleration of model updates.

Open Questions

  • How can we rigorously define and measure structured information across domains?
  • Can we experimentally operationalize Ψ(t) in living brains or advanced AI systems?
  • What is the thermodynamic cost of maximizing Ψ(t)? Does it map to entropy flux?
  • How does Ψ(t) relate to phenomenological reports of experience intensity?

Call for feedback:

Since LessWrong has many minds deeply engaged with these kinds of questions, I’d love to hear your critiques — especially regarding:

  • Formal weaknesses in the mathematical framing
  • Connections to existing complexity science models I may have missed
  • Pathways toward empirical testing
  • Risks of conflating noise with meaningful change (false positives in high dI/dt systems)
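On the last point, one simple mitigation would be to smooth Φ(t) before differentiating, so that sampling noise does not register as high Ψ. A sketch, where the window size and noise model are arbitrary choices for illustration:

```python
import random

# Smooth Phi(t) with a moving average before differencing, so that
# measurement noise is not mistaken for genuine restructuring.
# Window size 5 and the noise amplitude are arbitrary choices.

def moving_average(series, window=5):
    return [
        sum(series[i:i + window]) / window
        for i in range(len(series) - window + 1)
    ]

def max_abs_slope(series, dt=1.0):
    return max(abs(b - a) / dt for a, b in zip(series, series[1:]))

random.seed(0)
# A flat Phi trajectory corrupted by noise: raw step-to-step differences
# look like large Psi, while the smoothed series correctly looks static.
noisy_flat = [0.5 + random.uniform(-0.05, 0.05) for _ in range(50)]
raw_psi = max_abs_slope(noisy_flat)
smoothed_psi = max_abs_slope(moving_average(noisy_flat))
print(raw_psi, smoothed_psi)  # smoothing shrinks the spurious Psi signal
```

Any serious operationalization of Ψ(t) would need a principled version of this step, since a raw derivative amplifies exactly the high-frequency noise the last bullet worries about.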

Thank you in advance for your time and thoughts!
