Hi everyone,

I’m exploring a compact theoretical framework (ICT Model) that attempts to link information dynamics, temporal structure, and conscious processes using minimal assumptions.

The central idea is that the rate of informational change (dI/dT) is a meaningful physical quantity.

In this view:

consciousness ∝ local dI/dT

matter = stabilized information I_fixed

energy = interaction between changing and fixed informational states

reality’s “levels” emerge from stable mappings between I_fixed and dI/dT

computation / agency = organized flows of information updates
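
To make the intended reading concrete, here is a minimal notational sketch of these relations. It is illustrative only: the symbols C (local conscious intensity), M (matter), E (energy) and the coupling function f are assumptions introduced for this sketch, not definitions taken from the preprint.

```latex
% Minimal notational sketch of the ICT relations
% (illustrative assumptions, not the preprint's formalism)
C(x, t) \propto \left.\frac{dI}{dT}\right|_{x}    % conscious intensity ~ local rate of informational change
M(x) \equiv I_{\mathrm{fixed}}(x)                 % matter as stabilized information
E(x, t) \sim f\!\left(I_{\mathrm{fixed}}(x), \frac{dI}{dT}\right)   % energy as interaction between changing and fixed information
```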

The motivation is to provide a simple shared language connecting information theory, physics, phenomenology, and models of intelligent systems — including both biological and artificial agents.

A full preprint (with equations, phenomenology, and testable criteria) is here:

Preprint: https://doi.org/10.5281/zenodo.17584782

For discussion, please join us here:

https://www.academia.edu/s/8924eff666#comment_1478583

Feedback from people working in theoretical physics, computational neuroscience, or cognitive science would be very welcome.

  • AnarchistArtificer@slrpnk.net

    Useful context: I am a biochemist with a passing interest in neuroscience (plus some friends who work in neuroscience research).

    A brief minor point is that you should consider uploading the preprint as a pdf instead, as .docx can cause formatting errors if people aren’t using the same word processor as you. Personally, I saw some formatting issues related to this (though nothing too serious).

    Onto the content of your work: something I think your paper would benefit from is linking to established research throughout. Academia’s insistence on good citations can feel like it’s mostly just gatekeeping, but it’s pretty valuable for demonstrating that you’re aware of the existing research in the area. This is especially important because research in a topic like this tends to attract a lot of cranks (my friends tell me that they fairly frequently get slightly unhinged emails from people who are adamant that they have solved the theory of consciousness). Citations throughout the body of your research make it clear which points are your own and which come from established research.

    Making it clear what you’re drawing on is especially important for interdisciplinary research like this, because it helps people who know one part of things really well, but don’t know much about the others. For example, although I am familiar with Friston’s paper, I don’t know what has happened in the field since then. I also know some information theory stuff, but not much. Citations are a way of implicitly saying “if you’re not clear on where we’re getting this particular thing from, you can go read more here”.

    For example, if you have a bit that’s made up of 2 statements:

    • (1): Something that’s either explicitly stated in Friston’s paper, or is a straightforwardly clear consequence of something explicitly stated
    • (2): Something that your analysis is adding to Friston’s as a novel insight or angle

    Then you can make statement 2 go down far easier if that first statement is clearly cited. I use Friston in this example both because I am familiar with the work and because I know that that paper was somewhat controversial in some of its assumptions and conclusions. Making it clear which points are new ones you’re making vs. established stuff that’s already been thoroughly discussed in its field can act sort of like a firebreak against criticism: you get the best of both worlds, being able to build on top of existing research while also saying “hey, if you have beef with that original take, go take it up with them, not us”. It also makes it easier for someone to know what’s relevant to them: a neuroscientist studying consciousness who doesn’t vibe with Friston’s approach would not have much to gain from your paper, for instance.

    It’s also useful to do some amount of summarising the research you’re building on, because this helps to situate your research. What’s neuroscience’s response to Friston’s paper? Has there been much research building upon it? I know there have been criticisms against it, and that can also be a valid angle to cover, especially if your work helps seal up some holes in that original research (or makes the theory more useful such that it’s easier to overlook the few holes). My understanding is that the neuroscientific answer to “what even is consciousness?” is that we still don’t know, and that there are many competing theories and frameworks. You don’t need to cover all of those, but you do need to justify why you’re building upon this particular approach.

    In this case specifically, I suspect that the reason for building upon Friston is because part of the appeal of his work is that it allows for this kind of mathsy approach to things. Because of this, I would expect to see at least some discussion of some of the critiques of the free energy principle as applied to neuroscience, namely that:

    • The “Bayesian brain” has been argued to be an oversimplification
    • Some argue that applying physical principles to biological systems in this manner is unjustified (this is linked to the oversimplification charge)
    • Maths-based models like this are hard to test empirically.

    Linked to the point about empirical testing: when I read the phrase “yielding testable implications for cognitive neuroscience”, I skipped ahead because I was intrigued to see what testable things you were suggesting, but I was disappointed not to see something more concrete on the neuroscience side. Although you state

    “The values of dI/dT can be empirically correlated with neuro-metabolic and cognitive markers — for example, the rate of neural integration, changes in neural network entropy, or the energetic cost of predictive error.”

    that wasn’t much to go on for learning about current methods used to measure these things. Like I say, I’m very much not a neuroscientist, just someone with an interest in the topic, which is why I was interested to see how you proposed to link this to empirical data.

    I know you go more into depth on some parts of this in section 8, but I had my concerns there too. For instance, in section 8.1, I am doubtful of whether varying the temporal rate of novelty as you describe would be able to cause metabolic changes that would be detectable using the experimental methods you propose. Aren’t the energy changes we’re talking about super small? I’d also expect that for a simple visual input, there wouldn’t necessarily be much metabolic impact if the brain were able to make use of prior learning involving visual processing.

    I hope this feedback is useful, and hopefully not too demoralising. I think your work looks super interesting and the last thing I want to do is gatekeep people from participating in research. I know a few independent researchers, and indeed, it looks like I might end up on that path myself, so God knows I need to believe that doing independent research that’s taken seriously is possible. Unfortunately, to make one’s research acceptable to the academic community requires jumping through a bunch of hoops like following good citation practice. Some of these requirements are a bit bullshit and gatekeepy, but a lot of them are an essential part of how the research community has learned to interface with the impossible deluge of new work they’re expected to keep up to date on. Interdisciplinary research makes it especially difficult to situate one’s work in the wider context of things. I like your idea though, and think it’s worth developing.

    • DmitriiBaturo@beehaw.orgOP

      Hi! Thank you very much for such a detailed and thoughtful review — I really appreciate the time and attention you gave it. Your feedback is exactly the kind of constructive perspective that helps strengthen interdisciplinary work like this.

      Let me address your main points:

      1. On the Free Energy Principle. You are absolutely right that part of the neuroscience community sees FEP as overly broad and sometimes unnecessarily complex. In ICT, the model does not rely on FEP as a foundation: we use it only as an illustrative special case where dI/dT can be interpreted in terms of prediction error and the energetic cost of updating internal states. In other words, FEP is not a basis for ICT, but rather a local projection of a more general temporal structure. We will make this explicit in ICT 2.0 to avoid any confusion.

      2. Citations and connection to existing research. You’re right: in several places the preprint assumes familiarity with background work (entropy metrics, temporal integration, information-theoretic models, etc.). The next version will include a more structured “background and context” section with clear references throughout. Your comment here is very helpful and will definitely make the next edition stronger and more accessible.

      3. Empirical testability and neuroscientific methods. Thank you for highlighting this. Section 8 already outlines specific paradigms (oddball / novelty detection, LZ-complexity, entropy rate, γ-coupling, prediction-error energetics, etc.), and I agree that for readers outside neuro-metrics these connections should be made more explicit. ICT 2.0 will expand this section with clearer explanations of the applicable methods, their benefits, and their limitations as used in practice.
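
      To make point 3 more concrete for readers outside neuro-metrics, here is a minimal sketch of one such proxy: a simple Lempel-Ziv-style phrase count over a binarized signal, normalized for length. The median threshold, the normalization, and the function names are illustrative assumptions for this sketch, not the specific pipeline used in the preprint.

      ```python
      import numpy as np

      def lz_phrase_count(bits):
          """Count phrases in a simple left-to-right Lempel-Ziv-style parsing:
          each new phrase is the shortest block not seen in the preceding prefix.
          Higher counts indicate a richer, faster-changing signal."""
          s = "".join("1" if b else "0" for b in bits)
          i, n, phrases = 0, len(s), 0
          while i < n:
              j = i + 1
              # Extend the current phrase until it no longer appears in the prefix before it.
              while j <= n and s[i:j] in s[:i]:
                  j += 1
              phrases += 1
              i = j
          return phrases

      def normalized_lz(signal):
          """Binarize a 1-D signal at its median (illustrative choice), then scale
          the phrase count by log2(n)/n so recordings of different length compare."""
          x = np.asarray(signal, dtype=float)
          bits = x > np.median(x)
          n = len(bits)
          return lz_phrase_count(bits) * np.log2(n) / n

      # A rapidly varying epoch should score higher than a slow, regular one.
      rng = np.random.default_rng(0)
      fast = rng.normal(size=2000)                     # high rate of informational change
      slow = np.sin(np.linspace(0, 8 * np.pi, 2000))   # low rate of informational change
      print(normalized_lz(fast), ">", normalized_lz(slow))
      ```

      In practice such a measure would be computed per epoch or per condition and then related to the behavioural or metabolic markers discussed above; the point here is only to show what kind of quantity is being estimated.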

      4. On quantitative scales of dI/dT and metabolic effects. A key clarification is this: the magnitude of measurable effects varies greatly depending on the type of cognitive process. Simple, fast sensory events do produce very small changes, but the paradigms we propose are specifically chosen to target conditions where dI/dT varies much more strongly, such as:

      disrupted temporal sequences,

      violated expectations over time,

      high-level predictive mismatch,

      integration over multi-step patterns.

      These are precisely the contexts where entropy, γ-coherence, and prediction-error timing produce the strongest and most reliable signals. We are formalizing these estimates for ICT 2.0 and will include the corresponding references.

      You are right that basic stimuli produce minimal metabolic signatures — but the ICT experiments are deliberately focused on scenarios where the temporal structure is perturbed more significantly, and where the relevant methods are most sensitive.
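
      As an illustration of how the temporal rate of novelty could be parameterized in the kinds of conditions listed above, here is a minimal sketch of an oddball-style sequence generator whose deviant probability is varied across blocks. The block lengths, probabilities, and function names are illustrative assumptions for this sketch, not the protocol from the preprint.

      ```python
      import numpy as np

      def oddball_block(n_trials, p_deviant, rng):
          """One block of an oddball-style sequence:
          0 = standard stimulus, 1 = deviant (expectation-violating) stimulus."""
          return (rng.random(n_trials) < p_deviant).astype(int)

      def novelty_schedule(deviant_probs, n_trials=200, seed=0):
          """Concatenate blocks whose deviant probability sets the nominal
          rate of novelty (a crude knob on dI/dT) the participant experiences."""
          rng = np.random.default_rng(seed)
          blocks = [oddball_block(n_trials, p, rng) for p in deviant_probs]
          return np.concatenate(blocks), np.repeat(deviant_probs, n_trials)

      # Low-, medium-, and high-novelty blocks within one session.
      sequence, condition = novelty_schedule([0.05, 0.20, 0.40])
      for p in (0.05, 0.20, 0.40):
          observed = sequence[condition == p].mean()
          print(f"p_deviant={p:.2f}: observed deviant rate = {observed:.3f}")
      ```

      The deviant rate (and, in richer designs, the temporal structure of the sequence itself) is the controlled variable; which neural or metabolic readout is sensitive enough to track it is exactly the open question you raise.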

      And most importantly — thank you for your kind words. I’m an independent researcher, and your tone and careful attention truly mean a lot. The aim of ICT is not to bypass academic standards, but to offer something genuinely testable and conceptually consistent. Your comments help sharpen that aim.

      Thanks again for your thoughtful, precise, and generous critique.

      P.S. And yes… you’re not the first one to mention the formatting issue on Zenodo. We’ll fix that as well, but only in the next version, because Zenodo doesn’t allow replacing a file without deleting the entire record and creating a new version, which would reset the metadata and links. So I’m unable to change the extension there at the moment. If you’d like to download the paper as a PDF, you can do so via Academia: https://www.academia.edu/s/8924eff666