Hi all,
This is my first post here, and I'd appreciate constructive critique from this community. I'm developing a theoretical framework, in collaboration with ChatGPT, that I'd like to submit for your consideration: Derivative Information Theory (DIT), an attempt to model agency and consciousness not as static attributes but as emergent properties of information dynamics over time. I have no formal training and am still an undergrad, so I'd be grateful for feedback from those with experience in the field on whether there is anything worth pursuing in these ideas.
The motivation is straightforward: while frameworks like Integrated Information Theory (IIT) and the Free Energy Principle (FEP) have made great strides in defining static measures of system integration or predictive efficiency, they don’t explicitly address how a system’s change in informational structure correlates with agency. DIT aims to fill that gap.
Core Idea:
DIT proposes that consciousness and agency scale with the rate of change of structured information over time. Formally:

Ψ(t) = dΦ(t)/dt

Where:
- Φ(t) is the system's integrated information at time t, in the spirit of IIT
- dΦ(t)/dt is the rate at which that integration changes
- Ψ(t) is the proposed measure of agency
Intuitively, systems that not only have high integration (Φ) but also rapidly revise their internal structure and models (high dΦ/dt) exhibit higher degrees of agency. Consciousness, in this framing, isn't a static property but a dynamic gradient. This suggests, for example, that a highly integrated system whose structure has stopped changing (dΦ/dt ≈ 0) would score low on agency, while an otherwise similar system that is actively reorganizing its models would score high.
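To make the quantity concrete, here is a minimal numerical sketch, assuming you already have a time series of Φ estimates from some tractable proxy (exact IIT Φ is intractable for all but tiny systems). The function name agency_signal and the toy Φ trajectory are my own illustrative assumptions, not part of any established framework.

```python
# Minimal sketch of the DIT quantity Psi(t) = dPhi/dt, given a time series
# of Phi estimates (however obtained). `agency_signal` is a hypothetical
# name used only for this illustration.

import numpy as np

def agency_signal(phi_series: np.ndarray, dt: float = 1.0) -> np.ndarray:
    """Estimate Psi(t) = dPhi/dt by finite differences.

    phi_series : Phi estimates sampled at a fixed interval dt.
    Returns an array of the same length as the input.
    """
    phi_series = np.asarray(phi_series, dtype=float)
    # np.gradient uses central differences in the interior and one-sided
    # differences at the endpoints, so the output matches the input length.
    return np.gradient(phi_series, dt)

if __name__ == "__main__":
    t = np.linspace(0.0, 10.0, 101)
    # Toy stand-in for Phi(t): integration that rises and then plateaus.
    phi = 1.0 - np.exp(-0.5 * t)
    psi = agency_signal(phi, dt=t[1] - t[0])
    # Under DIT, agency is highest early on, while Phi is still changing,
    # and falls toward zero once integration has plateaued.
    print(f"Psi at t=0: {psi[0]:.3f}, Psi at t=10: {psi[-1]:.3f}")
```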
High-level motivation:
DIT stems from the intuition that change, not stasis, is the mark of intelligence. A system that merely predicts well is valuable, but a system that efficiently updates its own models in response to novelty and complexity exhibits agency in a more meaningful sense.
While IIT is powerful, it feels too static. By introducing the time derivative, we add temporal depth to the measure of integration.
Draft Model:
Early hypothesis: Systems that maximize Ψ(t) exhibit qualities associated with consciousness and agency.
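As a toy illustration of this hypothesis, here is a sketch under the same assumptions as above: two invented Φ trajectories, one frozen at high integration and one still rising. The trajectories and numbers are made up purely for illustration.

```python
# Toy comparison of the draft hypothesis. Both Phi trajectories are
# invented for illustration only.
import numpy as np

t = np.linspace(0.0, 10.0, 101)
dt = t[1] - t[0]

phi_static = np.full_like(t, 0.9)                  # high but frozen integration
phi_learning = 0.9 * (1.0 - np.exp(-0.3 * t))      # integration still rising

for name, phi in [("static", phi_static), ("learning", phi_learning)]:
    psi = np.gradient(phi, dt)  # Psi(t) = dPhi/dt by finite differences
    print(f"{name:8s}  mean Phi = {phi.mean():.2f}  mean Psi = {psi.mean():.3f}")

# The static system has higher mean Phi, but Psi is ~0 throughout; under
# the DIT hypothesis it therefore scores lower on agency than the
# learning system, whose integration is still changing.
```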
Applications (Speculative but testable):
Comparison to Existing Theories:
Open Questions:
Call for feedback:
Since LessWrong has many minds deeply engaged with these kinds of questions, I'd love to hear your critiques.
Thank you in advance for your time and thoughts!