But Where is the Memory?! A Discussion of Training Gene-regulatory Networks and its Implications

One of the key implications of my TAME framework is that intelligence can be found in surprising places: we are not yet good at predicting it, and need to do experimental work to ask where various unconventional systems (especially cells, tissues, etc.) land on this spectrum of persuadability.

One useful and interesting thing about a Diverse Intelligence approach is that it encourages us to test out tools and concepts from other disciplines, especially those focused on understanding behavior and cognition, to look for improved prediction, control, and invention of various unconventional systems. For example, right now molecular medicine treats gene regulatory networks and protein pathways as low-level machines, to be micromanaged by forcing specific signaling states via drugs targeting the nodes. But what if they had more advanced computational capabilities, and could be managed by taking advantage of their ability to change future behavior in light of past experience? The implications of this are discussed here and here.

So, what proto-cognitive competencies might gene regulatory networks (GRNs) and other kinds of chemical pathways have? Initially, this seems unlikely; after all, they are modeled as simple, deterministic systems in which a few nodes turn each other on and off (or up and down) according to clear rules. There is no room for magic and no hidden variables or mechanisms; surely they are not capable of anything like learning? But, in keeping with TAME (and its assertion that any assignment of a level of intelligence to a system is largely a statement about our own discernment), we decided to test that assumption: treat the pathway as if it were an agent and try the various tools of behavioral testing. Some of the nodes are treated as inputs (sensory pathways) and others as outputs (behavioral effectors), and we can test things like habituation, sensitization, anticipation, association, etc.
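To make the behavioral-testing idea concrete, here is a minimal, invented sketch (not one of the actual published models; the node names and constants are hypothetical) of how a fixed pathway could show habituation-like behavior: a slow inhibitory node integrates stimulus history under a fixed rule, so identical repeated stimuli evoke progressively weaker responses.

```python
# Toy continuous pathway: stimulus S drives response node R,
# while a slow node H accumulates stimulus history and inhibits R.
# The rules (decay, gain) never change -- only the node values do.
def pathway(stimuli, decay=0.9, gain=0.3):
    h = 0.0                    # slow inhibitory node (starts naive)
    responses = []
    for s in stimuli:
        r = max(0.0, s - h)    # response = stimulus minus accumulated inhibition
        h = decay * h + gain * s  # H integrates the stimulus history
        responses.append(r)
    return responses

# Repeated identical stimuli -> shrinking responses (habituation-like)
print(pathway([1.0] * 5))
```

The "memory" of prior stimulation lives entirely in the value of H, not in any change to the rules.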

When we did, we found several different kinds of learning, in both Boolean and continuous pathway models. Interestingly, these memory types (including Pavlovian conditioning) occur much more frequently in real biological network models than in random ones. One key feature, though, is that there is no synaptic plasticity here: the network is totally fixed. The connections between the nodes (its topology) do not change, nor do the strengths of the weights (the rules governing how the nodes control each other) – the physical structure of the network is totally fixed during training and subsequent testing. What does change is the pattern of node activities at any specific time, as signals propagate through the network (its physiology) and it is driven into new attractors that capture different likely responses to specific inputs.
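As an illustration of memory without structural change, consider this toy sketch (a three-node synchronous Boolean network invented for illustration, not one of the models from the actual papers). The update rules – the network's "structure" – are frozen; a transient stimulus nonetheless leaves the system in a different attractor.

```python
# Nodes: S = stimulus input, M = a self-sustaining feedback loop
# (like a bistable signaling motif), R = readout/response.
# The rules below ARE the fixed structure; they never change.

def step(state, stimulus):
    """One synchronous update of all nodes."""
    s, m, r = state
    new_s = stimulus       # input node is clamped to the stimulus
    new_m = m or s         # M latches: once on, the feedback loop holds it
    new_r = m              # response reports the loop's state
    return (new_s, new_m, new_r)

def run(state, stimuli):
    for stim in stimuli:
        state = step(state, stim)
    return state

naive = (0, 0, 0)
# "Training": one transient stimulus pulse, then nothing.
trained = run(naive, [1, 0, 0, 0])
print(naive)    # (0, 0, 0) -- resting attractor
print(trained)  # (0, 1, 1) -- same rules, different attractor
```

Same wiring, same rules – yet the trained network now responds (R=1) with no stimulus present, because its physiology, not its anatomy, carries the trace.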

This raises a key question. Given that the structure of the network doesn’t change during training, where is the memory? Where is the experience of the trained network stored, given that there is no explicit memory medium for engrams and the structure does not change in light of experience? Or, asked another way: if the structure doesn’t change, would it even be possible to distinguish a naive network (pre-training) from a trained one? What kind of observations would you have to make to look at a network and read its mind – to know what it had learned in its prior history, given that the structure is identical before and after? Can you read its content without interacting with it (stimulating it to see what happens)? This parallels the idea of neural decoding: can we look at a brain (scanning it however needed) and decode the cognitive content of the individual? We can use these pathways as a simple toy model to explore the notion of memory, and the difference between accessing engrams in first person (the being in whose cognitive medium the memories are formed) vs. in third person (external attempts to decode another being’s mind from the outside).

It may not be possible to know what memories a trained pathway contains from pure observation, but you can test the system with stimuli and get a clear answer from its behavioral outcome. So, what kinds of systems cannot be decoded by pure observation – which ones need to be functionally interacted with in order for an observer to understand what information they store? Here is a video of Richard Watson, Chris Fields, and me discussing this question; they had many interesting ideas:

In the end, here’s what I think is going on. Some systems cannot be decoded (understood, controlled) by passive inspection, but only by interacting with them functionally – by entering a sort of entangled dance together where you stimulate/signal to them and see what happens. In a way, it’s a kind of Heisenberg-uncertainty-like effect, where you can’t get all the information you need without disturbing (changing) the system. The surprising thing is that this property shows up very early on the spectrum of agency – in very simple systems; you don’t have to have a complex brain to have an internal perspective: by the time you have chemical pathways, this is already in place. Maybe we need a parameter – a number to express the degree to which a system needs to be interacted with to truly understand it. GPT-4 made the following suggestions for naming this quantity:

  1. Interospecivity:
    • Root: Derived from ‘Intero-‘ (inside or internal) and ‘-specivity’ (related to observation).
    • Meaning: The degree to which one must observe from within or via interaction to understand an agent’s thoughts.
  2. Dialognosia:
    • Root: From ‘Dialog-‘ (converse or interact) and ‘-gnosia’ (knowledge).
    • Meaning: Knowledge of an agent’s inner thoughts that can only be attained through dialogue or interaction.
  3. Actoceptive Index:
    • Root: ‘Acto-‘ (derived from action or activity) and ‘ceptive’ (related to perception).
    • Meaning: A measure indicating the extent to which comprehension of an agent’s thoughts requires action-based perception.
  4. Interactendency:
    • Root: From ‘Interact-‘ (between or mutual action) and ‘-endency’ (a disposition or inclination towards).
    • Meaning: The inclination or tendency for an agent’s inner cognition to be understood primarily through interaction.
  5. Engagivity Coefficient:
    • Root: ‘Engage’ (to interact or involve) and ‘ivity’ (a state or condition).
    • Meaning: A coefficient representing the state in which understanding an agent’s cognition is dependent upon engagement or interaction.

I haven’t picked one yet. Meanwhile a few miscellaneous thoughts:

  • The whole “you can’t tell what memories it holds without interacting with it” aspect is very compatible with the more general point of TAME with respect to agency, intelligence, and overall cognitive level – that these things can’t be judged from purely observational data. That is, by watching things happening you can’t tell how much competency is under the hood – you have to do perturbative experiments where you confront it with novel challenges to test hypotheses about what it’s measuring, remembering, learning, optimizing, etc.
  • What if the agent gives you a written note about what they are thinking – isn’t that a case when you can access their memories with pure (noninvasive) reads? Yes, and this is a metric of the system wanting to communicate – a narrow-bandwidth channel that works like Huygens’ Shelf to help discordant oscillators find a common resonance. Perhaps high-level agents that want to make it easier to share thoughts (states) with each other can use communication through a narrow medium with low privacy coefficient (one that can easily be read out by 3rd person observations) to help each other solve the privacy problem. Note that in doing this, they are effectively making it easier to be manipulated/interacted with, as this functional interaction is what enables fidelity in reading a system’s memory’s meaning. Thus, vulnerability is a necessary part of efficient spreading of content from mind to mind.
  • As pointed out by Wesley Clawson, there are gradations of “invasiveness” for the interaction (as distinct from purely observational read operations). Thus, the observer provides simple stimuli, or changes setpoints, or gives inputs calculated to radically change the inner structure of the agent (“the thought that breaks the thinker” in its most extreme form) – different degrees of interaction to improve understanding of the system at hand.
  • The further right you go on the Spectrum of Persuadability, the less those systems are readable purely by non-destructive reads (observation, not interaction). As this property increases, so does the aspect that whatever you say about this system you are really saying about the dyad of “you+system”, not about the system itself. This is consistent with TAME and the notion of polycomputing, where the observer’s perspective and their own cognitive competencies are a critical part of a view of the system (and there is no one objective, correct view – it’s observer-relative).
  • As Thomas Varley points out, issues of dynamical degeneracy (causal emergence) and time-symmetric chaos, as seen in the work of Hoel, Tononi, Olaf Sporns, and others, give additional reasons why the meaning inherent in such physiological engrams cannot be read out by external observations. Additional reading on all this, courtesy of Thomas:
  • So then, why is it pretty easy to understand (even if not perfectly) the content of your own mind and the engrams of memories you formed? It’s precisely because the relationship with your own mind is constant functional intervention. Via active inference and other strategies, you (the emergent virtual governor) are constantly intervening in your own cognitive medium (which is harder for others to do from the outside). The internal perspective is privileged for this reason – because it’s an active, functional one (not a pure observational system of read-decode).
  • The above is consistent with the increasing realization that recalling your own memories is also a perturbative process – memories are not read out non-destructively, but are actually modified by recall. We revise our memories by recalling them.
  • So where is the memory in such a system? It’s in the relationship between it and the observer.
  • One way to recover the memories is to interact with a copy or a model of the system, instead of the system itself. And of course this is what we (as biological systems, not just brains) do all the time, because we have internal self-models with which to do simulated experiments to know what we think.
  • The level of observation is crucial. Even a traditional, easily-recognizable memory element (e.g., a flip-flop) looks strange if examined at the level of the atomic particles: none of them have been marked or changed by storing data in the register – they are still factory-state particles. So where is the memory then, if you can’t scratch it onto the copper atoms? It’s present in the higher-level configuration of the system. Memories are always present at one level, and inscrutable at lower levels of observation (bringing us back to the relationship between a system, its own private memories, and an external observer trying to decode them, and needing to pick the right level).
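The flip-flop point can be made concrete with a toy simulation of a NOR-based SR latch (a standard textbook circuit; the Python rendering is just an illustration). The parts – two cross-coupled NOR gates and their wiring – are identical whether the latch stores 0 or 1; the bit exists only at the level of the circuit's configuration.

```python
# A NOR-based SR latch: the canonical 1-bit memory element.
def nor(a, b):
    return int(not (a or b))

def latch(q, qbar, s, r, iters=4):
    """Iterate the two cross-coupled NOR gates until they settle."""
    for _ in range(iters):
        q, qbar = nor(r, qbar), nor(s, q)
    return q, qbar

q, qbar = 0, 1                        # latch storing 0
q, qbar = latch(q, qbar, s=1, r=0)    # pulse Set
q, qbar = latch(q, qbar, s=0, r=0)    # remove inputs: the bit persists
print(q, qbar)  # 1 0 -- the stored 1, held purely by configuration
```

Nothing about the gates themselves changed when the bit was written; inspecting the components at a lower level would reveal no memory at all.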

Even something as simple as a small gene regulatory network model has interesting implications for deep questions about memory and the selves to whom memories belong.

54 responses to “But Where is the Memory?! A Discussion of Training Gene-regulatory Networks and its Implications”

  1. Tiffany J Isbell

    Wonderful, thought provoking discussion! All of the wheels are turning brilliantly.

  2. Ralph Mayer

    The reason you followed me on Twitter has nothing to do with my theory, or this would be a different conversation. You just liked my answers. I’m not a biologist, more of a mathematical physics guy. I found what you’re looking for, and it’s a genetic matrix that you’re already working with when you regrow heads and legs. Geneticists find my theory more interesting than anyone. All the particles in the Standard Model showed up in this theory spawned in Mendelian fashion. Mendel’s recessiveness shows up in particle theory as interaction with environment, then going back for regeneration or modification, it’s how your information flow works. I’m applying to work with you very soon. I write about autonomy and self assembly on LinkedIn, that’s the website link.

  3. Pamela Lyon

    Lots to chew on here. Love the careful way you’re working out the implications of TAME. Do hope you’re working on a perspective piece pulling together your thoughts on memory based on these musings. (Hint hint.)

    Engram as configuration/relationship between elements in the system is, of course, exactly right. Has to be. GRNs are a perfect example that you don’t need neurons for that. So true: “…you don’t have to have a complex brain to have an internal perspective: by the time you have chemical pathways, this is already in place.”

    However, you should be aware of (2) essential (highly cited) references I’ve been flogging for years in relation to bacteria. We don’t know how memory is maintained in these cases, but we do know they are dissociable and reprogrammable.

    Predictive behavior within microbial genetic networks
    I Tagkopoulos, YC Liu, S Tavazoie (2008)
    Science 320 (5881), 1313-1317

    Adaptive prediction of environmental changes by microorganisms
    A Mitchell, GH Romano, B Groisman, A Yona, E Dekel, M Kupiec, … (2009)
    Nature 460 (7252), 220-224

    Your musings also resonate with another item I read this morning about the recent discovery in bats that so-called place cells function not only in relation to location but also to social relationships. ‘Navigating space’ on two seemingly discrete levels but both existentially critical (always the common denominator in biological systems).

    Hence the potency of your delightful observation: “Thus, vulnerability is a necessary part of efficient spreading of content from mind to mind.” To be open to perturbation by not-self is the core of interaction and relationship, as well as the inescapable fact of all life.

    Thank you so much!

    1. Mike Levin

      Brilliant, thanks! Great references. Do you have a source on the bat thing – it’s absolutely perfect for something I was writing about what landscapes look like from the perspective of different agents. Oh and yes, I am indeed working on something on memory. Just behind on everything!

      1. Pamela Lyon

        Angelo Forli & Michael M. Yartsev (2023). Hippocampal representation during collective spatial behaviour in bats. Nature 621, 796–803.

        Published in August. Great news about memory thing.

    2. Andrea Hiott

      What a wonderful and helpful reply! Great to see these references and added even more to an already exciting post.

  4. Micah Zoltu

    Great article, but I’m quite skeptical of this claim (and therefore the conclusions that arise from it further on in the article):
    > It may not be possible to know what memories a trained pathway contains from pure observation

    I’m a bit of a determinist, so I find it quite difficult to buy into the premise that there is some kind of un-measurable phenomena occurring inside cells (or whatever) that allows them to do interesting things but cannot be explained.

    I could definitely get on board with the claim that our current technology doesn’t allow us to measure these things with sufficient detail, or our current algorithms/processing power isn’t sufficient to allow us to predict outcomes in very complex systems. However, this is quite different and, importantly, leads to quite different conclusions than if one believes that behaviors are truly unpredictable based solely on the inputs and internal states.

    I agree with just about everything else though!

    1. Mike Levin

      Yeah that’s one reason I think all this is interesting: it’s precisely *because* this issue occurs already in deterministic, fully transparent systems. The GRN models we’re talking about here are totally deterministic, and we can see every part of the system – there’s nowhere to hide, no possible new pieces/mechanisms to discover (unlike in cells, where we’re never quite sure). My claim is not that there is something unmeasurable in cells, or that there’s anything that cannot be explained or predicted (although “explain” is not as obvious as it may seem). It’s that even in very simple systems, like the pathways that exist inside of cells, there’s an interesting property that requires us to interact with the system to really understand the meaning of its memory traces. But of course, there are many other such emergent limitations, such as the Halting Problem in Turing machines, Gödel limits, deterministic chaos (which makes even some deterministic systems unpredictable), etc. But in this case, it’s quite simple: in our computational system, the network is fixed – it is not changed by the experience at the hardware level (that’s a fact, not a conjecture, since we control the code and that’s how we made the system). This means you can’t tell if a network has been trained or not, by looking at its hardware (without poking it and seeing what it does).

      1. Micah Zoltu

        I fully agree that if we just look at the hardware’s *structure* we won’t be able to tell if it is trained or not. However, we can look at the current internal state of the hardware (e.g., where are the electrons) and from that potentially know whether or not it has been trained, and potentially predict how it will behave.

        My guess is that GRNs and single cell memory is similar. There may not be a structural change, but that doesn’t mean there isn’t an internal state change. Epigenetics (e.g., DNA methylation) is a prime example of an intra-cellular state machine with memory and (arguably) intelligence & decision making capacities.

        1. Micah Zoltu

          To put it another way, if we only pay attention to where the protons and neutrons are at and we ignore where the electrons are at, we can’t determine the current state or predict the behaviors of the machine. However, once we account for the position of electrons we now can read the state of the system and predict its behaviors.

      2. Vicente Sanchez-Leighton

        Well it is interesting, because even DRAM memories are very similar: reading a DRAM memory is destructive; the memory controller has to write the contents back again after reading. And even if you don’t read the memory, the controller has to regularly “refresh” (i.e., re-write) the contents, because the charge that represents the memory leaks. So DRAM memories are illusions maintained by constant “poking”. There are also unavoidable side-effects of current DRAM technology, like the rowhammer bug: by writing very quickly to some memory cells you can modify the content of other cells… so poking can even make memory stop behaving as a proper memory 😉
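        The DRAM behavior described in this comment can be sketched in a few lines (a cartoon model with invented decay constants, not real DRAM timing): reads are destructive and must be followed by a write-back, and without periodic refresh the stored bit leaks away.

```python
# Cartoon DRAM cell: a leaky charge. Sensing drains it, so the
# controller must restore the value after every read, and must
# refresh periodically or the bit evaporates.
class DramCell:
    THRESHOLD = 0.5

    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def leak(self, steps=1):
        self.charge *= 0.8 ** steps   # invented decay rate

    def destructive_read(self):
        bit = self.charge > self.THRESHOLD
        self.charge = 0.0             # sensing drains the cell
        return bit

    def read(self):
        # what the controller actually does: read, then restore
        bit = self.destructive_read()
        self.write(bit)
        return bit

cell = DramCell()
cell.write(1)
cell.leak(2)
print(cell.read())   # True -- and the read rewrote the full charge
cell.leak(10)        # too long without refresh
print(cell.read())   # False -- the stored 1 has leaked away
```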

        1. Mike Levin

          very cool, I forgot about DRAM, great example.

  5. Andrea Hiott

    A stimulating and thought-provoking piece. Thank you for sharing this. Would this idea of memory fit with a notion of memory as the assessment of the continuity of regularities from some position of measurement? Memory as assessed habit or inertia. Is habit or inertia still memory if it is not being assessed? A bit like the cat being both alive and dead until Schrödinger’s box is opened. In other words–could programs be understood as habits: to poke something means to activate its habituations, to open the box?
    All questions I asked myself while reading.
    Also, just this morning I was listening to the Brain Inspired Conversation with David Glanzman called Memory all the Way Down (Episode 172) and of course it made me think of your work. Would be interested in your views on Glanzman’s neuroscience… what you think about this particular RNA or nucleus-based memory notion connected to McConnell’s work…I imagine you have already dialogued with Glanzman about this, and wonder if you might be able to point me towards any such discussions that might be out there.

    1. Mike Levin

      Oh yes, David Glanzman and I have talked a lot. I like his experiments very much, he’s definitely on to something. The most amazing part is that you can just sort of squirt the RNA into the brain, no fine-tuning of where or how it’s delivered, and it works. Very different from our brittle memory media in engineering. But there are some deep and general issues about using any material medium (around decoding). I’m working on a longer piece on that, stay tuned.

  6. Tony Budding

    From what I can tell, the appeal of modular intelligence models is their ability to predict learning patterns. However, the title of this article is “But Where is the Memory?” Both the micro set-points of a deterministic system and the macro experience of human memory are discussed. In human memory, we assume something is stored outside of our active awareness, and then somehow brought into this awareness (sometimes deliberately and sometimes autonomously). Perhaps the more important question is what is this awareness and where does it reside? A dead brain has no awareness, so it’s not inherent in the cells themselves.

    Furthermore, as you point out, memories are not fixed, static entities. They’re changed by our recalling them. All intelligence requires a set-point, which is a version of an agenda. We humans have some ability to deliberately change our specific agendas based on our desires and acts of will. Where do these desires and acts of will reside?

    There’s an interesting metaphor with music. We can measure, capture, store and reproduce sound. Sound waves all leave the speakers at the same speed, yet we say some music is fast and some music is slow. In improvisational music such as instrumental jazz, the musicians carry on a nonverbal conversation that attentive listeners can discern. Furthermore, the same sound waves can inspire widely diverging emotional experiences among listeners. In this way, the physical realities of the sounds cannot fully explain the experiences of music.

    There must be something else going on in addition to the movement of air through space. There’s an experiential phenomenon that requires a different approach. Science, of course, is the process of determining the causes of measurable effects, but not all of life is measurable. Awareness, attention, desire, willpower, love, angst, even the simple feeling of hunger are all unmeasurable, to name just a few.

    There are certainly real issues with applying the Scientific Method to immeasurable phenomena, but we have to find something. Just as the drunk will never find his keys under the streetlight (since he dropped them in a dark alley), we can never explain experiential phenomena completely if we restrict our approach only to the light of the observable and measurable.

    However, once we accept this requirement to treat unmeasurable phenomena distinctly from measurable phenomena, many cause and effect relationships can be understood and utilized effectively. It’s certainly uncomfortable to discuss desire, will, agency, and attention in a scientific forum, but without doing so, we’ll always miss out.

    1. Micah Zoltu

      For the music “speed” example, I think this is just a matter of pattern matching at work (something human brains are *really* good at). The “speed” term we apply to music is derived from the larger patterns that our brains detect in the incoming sound waves. I think this is sort of what Michael is getting at, in that there are layers of structure and sometimes the lower layers store the important bits of information, but sometimes it is the higher layers that store the interesting information.

      If you drill down far enough (to individual atoms) it is likely that they don’t really store much interesting information at all (though, it is possible they do in things like spins and orbital states and such). Similarly, if you just look at individual sound waves you won’t get a whole lot of interesting information, just as if you look at individual nucleotide sequences you won’t get very interesting information.

      However, each of those things is part of a larger composition that has more and more complex and interesting information stored in it.

      1. Tony Budding

        Right. This is the appeal of modularity. Systems can accomplish a lot more than individuals, yet the systems are nothing but a collection of coordinated individuals. To understand them, we need to incorporate a multi-view approach. We need the forest view, the trees view, and the molecular view, along with how they relate. Michael’s work on this is extremely impressive.

        There are two examples here of why we need to address unmeasurable phenomena. The first is “where” forest view memories, insights, experiences, cognitions, theories, etc. operate. The second is this systemic coordination of individual elements.

        As you say, individual atoms don’t store a lot of interesting information. Yet, what they do store and contribute is essential to the systemic functionality. The whole truly is greater than the sum of its parts. What I’m saying is there is a reality to the systemic layers (albeit unmeasurable).

        TAME itself is a perfect example of an unmeasurable systemic reality. It’s a model of how intelligence functions that is so real that all of us are taking the time to study and discuss it. We need words to discuss TAME, yet the words we use are not TAME. Because of Michael’s and others’ work, we have access to TAME, yet each of us will form our own unique version of it in our minds.

        Somehow, we have to be able to discuss the conceptual model itself, along with the benefits and challenges of each unique version of it. Michael needs to present the model in the most effective way possible, and we all need to work hard with integrity to construct our own understanding of it. In this way, TAME has both a shared and an individual reality to it.

        So, “where” does TAME exist? This is a conundrum because it does exist. We can’t say it’s nowhere, but it’s certainly not anywhere observable or measurable in three-dimensional space. (FWIW, I’ve been working on questions like this for over 30 years)

        The other conundrum is around the systemic coordination of individual elements. If we jump to the most complex system we know of (human life), we experience huge variations in the efficacy of our systemic coordination. Sometimes things just click and we navigate life seamlessly. Other times, we feel like we’re in quicksand, putting our foot in our mouth, or otherwise just error-prone. Sometimes our bodies are able to fend off diseases, and sometimes they’re overwhelmed and succumb to illness.

        Even simple systems operate with some variability. For example, I’d assume learning rates at each level of a system vary noticeably (please correct me if I’m wrong). Where does this variability come from, and is there any way to influence it?

        One thing we can know is that this coordination is both effortful and fragile. The active coordination of cellular activity is called life, and the loss of that coordination is called death. Life is effortful, and death is the end of effort. So what is effort?

        As TAME points out, all intelligence requires some form of determination around a set-point. The altering of set-points toward increased efficacy is called learning. There can be no learning without determined effort. We can say life and determined effort are highly aligned, if they’re not in fact the same thing. Either way, exploring the mystery of effort could be a key toward understanding the mystery of life.

        So, by what means is effort even possible? The effort to coordinate biological systems can’t be measured, though we can see its effects. We know determined effort occurs at each and every level of intelligence. So what is it?

        Somehow, we have to be able to talk about these unmeasurable phenomena in a meaningful way that doesn’t devolve into faith-based assertions. We need to combine complete scientific rigor on what we can observe and measure, with a logically consistent approach to what we can’t observe and measure.

        (FWIW, I suspect that determined effort is to TAME as what light is to digital photography. All the hardware and software of the cameras, lenses, sensors, codecs, filters and editing tools are the specific means by which images are created and refined, but no photographic imagery at all is possible without light.)

        1. Micah Zoltu

          I think where we perhaps differ, and where I think I may differ with Michael based on this article, is whether these things are unmeasurable, or hard to measure (where hard may mean impossible for us today).

          I’m quite skeptical about claims of absolute unmeasurability, including well-accepted ones like the Uncertainty Principle.

          I do agree with you that there is value in being able to communicate about things that are too hard for us to measure today, but when it comes to building frameworks I think we should be clear when we mean truly unmeasurable vs just hard to measure.

          1. Tony Budding

            Sure. I personally have no stake in where the boundary between the currently unmeasurable and the truly unmeasurable may be, though I do suspect there are truly unmeasurable phenomena. Happy to be wrong, but we’re a long ways away from knowing. In the meantime, the difference is insignificant when it comes to the need for how to work with the unmeasurable.

          2. Mike Levin

            That’s the interesting part about the GRN example. Nothing here is unmeasurable – everything is right there, deterministic and easy to measure. We made the code, we know what’s in it, every part is available for inspection. That’s the point – it’s a minimal system where we can see and measure every part of it, and still there are limits on an external observer’s ability to understand what they are seeing (trained or untrained network). So being able to measure everything is not enough, and if it’s not enough even in a simple GRN, how much more it will be true for systems like brains.

            1. Micah Zoltu

              Note: Your comments section seems to have a depth limit, so replying to a comment in a new top-level comment (sorry for anyone only following that one thread!)

              > That’s the interesting part about the GRN example. Nothing here is unmeasurable – everything is right there, deterministic and easy to measure. We made the code, we know what’s in it, every part is available for inspection. That’s the point – it’s a minimal system where we can see and measure every part of it, and still there are limits on an external observer’s ability to understand what they are seeing (trained or untrained network). So being able to measure everything is not enough, and if it’s not enough even in a simple GRN, how much more it will be true for systems like brains.

              I think there is a big difference between “our puny brains are not capable of predicting the behaviors of X from its [inputs, internal_state]” and “it is impossible to predict the outcome of X from its [inputs, internal_state]”. I can definitely get behind the former.

              I can even imagine an algorithm/intelligence/program that is maximally “compressed” such that anything that can predict its outputs from its [inputs, internal_state] must necessarily be bigger/more complex than the thing whose behaviors are being predicted. It is therefore easy to get into a situation where a human brain (a fairly general-purpose tool) cannot predict the behaviors of a sufficiently complex thing. I can also accept that we may not have sufficient tools to predict it even using computers and such. My only real argument, I guess, is just that I don’t think this means we cannot ever predict its behavior by leveraging sufficiently powerful tools. It is not immeasurable; it is just immeasurable by us today.

              With regards to your work on human morphology, I find it incredibly unlikely that our 100-billion-cell brains are capable of even comprehending the behaviors of a network intelligence that is trillions of cells big, basically for the reasons outlined above. This doesn’t mean we can’t maybe figure out how to communicate with it or even control its behavior though! My cat has a much smaller brain than me, but is able to communicate with me and control my behavior in ways it finds beneficial, to a limited extent. If my cat had access to AI, I bet it could do an even better job.

        2. Mike Levin Avatar
          Mike Levin

          “Determined Effort” – yes indeed! very key.

  7. Joseph McCard Avatar
    Joseph McCard

    The brain does not contain memories. The brain is a transducer. It communicates information to and from the non-physical mind.
    Memory is non-physical and alive, not dead and inert. It is a container that receives and forms in-formation. Memory holds and attracts information.

    1. Mike Levin Avatar
      Mike Levin

      On that model, do non-physical minds only connect to neurons? or to any other physical objects? Which ones? What determines whether something (a brick, a robot, a non-neural organoid, etc.) can transduce a non-physical mind? When we make a synthetic organism in a new form that never existed on Earth before, does it get its own non-physical mind, and if so, what kind (and has it been around before, waiting for a proper embodiment to show up)? These are all issues that such a model needs to address.

      1. Joseph McCard Avatar
        Joseph McCard

        Mike, a mind is a Gestalt of conscious energy that forms and interprets its reality. The brick itself is also composed of conscious energy. A physical brick is a pattern of conscious energy; it is vitalized, aware, charged, and as such, it communicates through transduction with its non-physical counterpart.

        A synthetic organism exists in what Leibniz, Everett, and David Lewis refer to as All Possible Worlds, an infinitely creative realm of conscious energy. That possible organism, composed of conscious energy, is innately endowed with the desire for growth and creative organization.

        I imagine these answers would lead to additional questions.

  8. Matt Avatar

    I have a hard time visualising that concept: “the memory is in the relationship between the network and the observer”. Like the observer modifies his way of looking/poking at the network? Surely there must be a physical trace for that, somewhere?

  9. Mike Levin Avatar
    Mike Levin

    It’s all in the scale at which you observe. If you look at the electrons and protons in a piece of computer memory, you won’t find a physical trace – these components cannot be modified (except, as someone pointed out, by spin perhaps) – the memory is only there if you look at the right level of organization (the larger-scale pattern), and, in some systems (like the dynamical GRNs), if you get to poke the system (you’re an interactive observer). It’s sort of a relativity principle (as explored in https://pubmed.ncbi.nlm.nih.gov/23386960/ and https://pubmed.ncbi.nlm.nih.gov/36975340/ for example): the information appears if you as an observer interact with the system in the right way. In our case, the trained and untrained GRNs are precisely the same with respect to their structure – we don’t allow connectivity or edge strength to change during the learning process. An observer can’t see the evidence of training just by observing the structure of the network itself.
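    To make “the memory is in the dynamics, not the structure” concrete, here is a minimal toy sketch (Python; emphatically not the actual GRN model from the paper – the single self-exciting node, its weight, and its threshold are invented purely for illustration). Two copies of a system with identical, frozen structure end up answering a probe differently only because of their stimulation history:

```python
# A single self-exciting node: the only "memory" is its dynamical state.
# Structure (w, bias) is frozen, as in the trained-GRN scenario above.
import math

def step(x, w=10.0, bias=-5.0, stimulus=0.0):
    """One update of the node; w and bias never change."""
    return 1.0 / (1.0 + math.exp(-(w * x + bias + stimulus)))

def settle(x, steps=50):
    """Let the node relax to its current attractor."""
    for _ in range(steps):
        x = step(x)
    return x

naive = settle(0.0)                      # never stimulated: low attractor
trained = settle(0.0)
trained = step(trained, stimulus=10.0)   # one transient "training" pulse
trained = settle(trained)                # pulse is gone, but state persists

# Same equations, same parameters, different history -> different behavior.
print(round(naive, 3), round(trained, 3))  # low vs. high activity
```

    Inspecting w and bias (the “structure”) reveals nothing; only an observer who interacts with the state sees the difference.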

    1. Matt Avatar

      So what is actually “changing” when you train a network? Components of the network just decide to respond in a different way?

  10. Joseph McCard Avatar
    Joseph McCard

    The brain is a transmitter and receiver that communicates, by transduction, with the non-physical mind (outside the EM spectrum), of forms of conscious energy (in-formation).
    There are no permanent memory banks in the brain. The brain sends information the body experiences to the mind.
    The mind creates living vital memories, not objective, for-ever available banks of dead inert information.

  11. Turil Cronburg Avatar

    The difference between living things and non-living things, as I’ve defined the term, is that living things have (effectively) independent needs (goals) in relation to their environment, while non-living things do not. This means that living things (carbon/protein/biological or whatever) “behave” unpredictably, as viewed from outside, since they act on internal motivations to do something to change their state to serve their needs (both input and output).

    Personally, I don’t call the basic level of life-serving-its-own-needs “intelligence”, as I reserve that for far more complex motivations of serving the needs of the self, a companion, and the larger community/environment all at once — objective perspective taking for technological/sociological problem solving. But the physical level of serving one’s own needs is definitely consciousness, in my definition of the word.

    This is why AI/computers/software is not conscious, or alive, in my definitions. The computers have no independent goals to serve their own needs. They are simple (but complex) calculators that just process information given to them using formulae/code given to them. Sure, it looks similar to what we animals can do, as far as some outputs, but the process is very different.

    Now, you can say that we animals are doing the same thing, processing information based on code we’re given (genes + environment), and to some extent that is indeed how we work, but part of that programming is to have independent needs that we are motivated to serve. Maybe someday we can design computers (robots, most likely), that are programmed to care about their own needs, and aim to serve them, and, hopefully, to be intelligent, as well, so they can care about their own needs, the needs of their companions, and the needs of their larger community/environment, so that they can also create technology/sociology that solves problems for all of us together.

  12. Joseph McCard Avatar
    Joseph McCard

    Unfortunately, science has bound up even the most original thoughts of Mike’s brilliant mind, for he has not strayed from certain scientific principles. He has not taken into account the consciousness factor. All energy contains consciousness. That one sentence is basically scientific heresy. His recognition of this simple statement would indeed change his model.

    yours friendly, joe

    1. Tony Budding Avatar
      Tony Budding

      Joe, two simple but important questions: what is your definition of consciousness and how do you know?

  13. David Corfield Avatar
    David Corfield

    We might see the field of psychiatry/psychotherapy through the lens of your persuadability figure, from lobotomy to CBT.

    “So then, why is it pretty easy to understand (even if not perfectly) the content of your own mind and the engrams of memories you formed?…you… are constantly intervening in your own cognitive medium”

    Interesting to think then of those contents of one’s mind that are much more difficult to understand by oneself. Psychodynamic therapy is looking to avail itself of a form of thinking that takes place outside of awareness.

    1. Mike Levin Avatar
      Mike Levin

      That’s interesting! You are then sort of outsourcing some of the needed perturbations to someone else who can poke at your cognitive structure better than you yourself can, in some ways.

      1. David Corfield Avatar
        David Corfield

        Yes. Just the provision of a regular space in which one is to say whatever comes to mind already changes things. And then there’s the non-judgemental attention of someone listening, the consideration of dreams as meaningful, the skilful direction of shared attention, and so on.

  14. Joseph McCard Avatar
    Joseph McCard


    Thanks for your questions.
    I first read the comments you gave above. I start with the last: “how to work with the unmeasurable.”
    The scientific approach works quite well in certain situations, such as scientific measurements, but all in all, as it is understood and used, it does not work as an overall approach to life, or in solving problems that involve subjective rather than objective measurements or calculations. These methods work least of all for any art. It is a trite statement, perhaps, but the ruler’s measurements have absolutely nothing to do with the measurements made by the heart, and they can never be used to express the incalculable measurements that are automatically made by the smallest cell.
    I have no doubt that Mike’s intellect is brilliant, but like many in the sciences, it is isolated in both time and space in a way other portions of his personality are not. These tendencies are not natural to his intellect; they arise only when it is forced to operate in such an isolated fashion, isolated from other portions of his personality that are meant to bring him additional information, in a kind of natural support. The rational scientific approach suits Mike, while it does carry its disadvantages.
    Your two questions, requests for definitions and proof, are examples of the same approach. Consciousness cannot be defined, for you would limit what it is; it can only be expressed and released by each of us. And it cannot be proven, as Gödel, Heisenberg, and science’s experimental method have suggested, experiments constantly being re-calibrated as new information appears. They are dealing with narrow and limited amounts of information, as your comments above suggest.

    “How to work with the unmeasurable”? I suggest: follow your feelings, impulses, dreams, intuitions. That information can then be analyzed and evaluated by your intellect.

    Following that approach, I have affirmed that consciousness is a form of energy and all energy is conscious. Consciousness creates all forms of matter and energy. There are what can be called units of consciousness (see Leibniz’s monads, for example) that have properties that hold together and create a logical framework that resolves most of the issues you raised in your comments above. Your comments on music were particularly meaningful to me, as my former prof. Victor Zuckerkandl had much to say about music and consciousness.
    If you have any further interest in this, please feel free to ask and post it here, as I will monitor this forum, or you can email me at ammccard@gmail.com, and I am on X and #consciousness daily. And perhaps Mike will chime in.

    E = C => mc^2

    1. Tony Budding Avatar
      Tony Budding

      Thanks Joe. What Michael is doing here is an open-source discussion around modular intelligence, which is a scientific pursuit. Science is the process of hypothesizing causes for knowable phenomena in such a way that they can be both tested and validated. He’s sharing his knowledge publicly, which is both an act of generosity, and potentially an effective way to further develop his theories of cause and effect related to intelligence.

      The first criteria for such endeavors is defining key terms unambiguously, such that an effective discussion and debate can happen. You seem to be proposing non-scientific approaches, which is your prerogative, but doesn’t further this discussion or address the purpose of this free website.

      There is a very slippery slope when discussing unmeasurable phenomena of falling into untestable claims. If we wish to further practical knowledge of cause and effect, we must hold extremely rigorous standards of logical reasoning tied to the boundaries of what can be observed and validated.

      There is certainly more to life than science, but I believe we should respect the Scientific Method when posting on a science-based website.

  15. Joseph McCard Avatar
    Joseph McCard

    That is a belief you have. That is not my belief. Science is in a position where your private experience does not correlate with what you are told by science (as evidenced in your comments about immeasurability, for example). This “unconscious” immeasurable knowledge you speak about will be expressed by science, under and with the direction of an enlightened and expanding egotistical awareness that can organize that neglected knowledge, or it will be done at the expense of the reasoning intellect, leading to a rebirth of superstition, chaos, and unnecessary wars between reason and intuitive knowledge that have become so obvious today. Cults and factions have emerged, each unrestrained by the use of reason, because reason, as used by Mike and you, has denied the existence of rampant unconscious knowledge, disorganized and feeling only its ancient force.

    Do you want to take credit for that?

    yours respectfully, joe

  16. Mike Levin Avatar
    Mike Levin

    Here’s how I think about it. Not whether it’s science or something else – I’m willing to think about any hypothesis – no matter how wild or unconventional. But the question I ask is: what does it facilitate me to do next? What practical progress can be made if we take this on? What does it help us to do, that we couldn’t have done without that idea? If an idea meets that criterion, then we’ve got something. Otherwise, not. I’ve heard a lot of pronouncements about these topics that are nearly impossible to squeeze any utility out of. And it’s not up to others to flesh out the implications of these kinds of ideas – it’s up to whoever proposes/defends them to make it clear what their utility is, to say how taking on this idea will elevate capabilities, lead to new experiments, and new research.

    1. Tony Budding Avatar
      Tony Budding

      What is the utility in being able to talk about the unmeasurable in a substantive way, even to the point that we can establish causal relationships between inputs and outputs? You’re already talking about the unmeasurable, only without detailed causal relationships.

      For example, in the Big Think video you did on the self, you said, “And I think it’s really important to understand that the contents of your mind, your self model, your model of the outside world, where the boundary between you and the outside world is—so where do you end and the outside world begins—all of these things are constantly being constructed and created.”

      What is the reality to this constructed self-model? We each have our own, but we also relate to each other’s. The specific details and characteristics of these models are hugely impactful on our experiences in life and our competencies. They are both stable and malleable. Not only do we have some ability to influence the characteristics and nature of our constructed selves deliberately, we can work hard to develop specific skills that increase that ability (similar to how physical training allows us to lift heavier weights).

      In spite of all this playing out, none of us can measure the self or its characteristics. We can’t prove we have a self, or what its characteristics are. Yet, there are all these causes and effects around it. What can we say about, and perhaps more importantly, how can we test and evaluate, any governing principles of how the self is created, maintained and evolved?

      The self is actually quite difficult to work with because it’s so enmeshed in layers of causes and effects in both directions of creating content and experiencing it. Let’s take something much simpler.

      In a similar constructive dynamic as the self, we build and maintain experiential models of the physical world. If you close your eyes, can you count the number of turns required for you to walk from your bathroom sink to your bed? Of course you can. Why? Because you have a detailed map of your home in your mind that’s so real you can not only recall simple memories of it, you can interact with it. We can’t say this model doesn’t exist, but neither can we say “where” this model exists without some additional theoretical constructs.

      Furthermore, you wrote, “So then, why is it pretty easy to understand (even if not perfectly) the content of your own mind and the engrams of memories you formed? It’s precisely because the relationship with your own mind is constant functional intervention. Via active inference and other strategies, you (the emergent virtual governor) are constantly intervening in your own cognitive medium (which is harder for others to do from the outside).”

      What are the characteristics of this relationship with your own mind? This begs the question, if “you” relate to “your own mind,” what differentiates you from your mind? Can “you” relate to your self also? If that’s the case, and it is, it means there are different aspects to the self. Are these relationships perpetual, or do they come and go? Are they determined, or can we influence them deliberately through our own efforts?

      So, here’s my hypothesis. There is a reality to all the constructed content in the mind (including the self), and this reality has knowable rules of cause and effect. These rules are often quite different from the rules of physics and biology, but they’re rules nonetheless. These rules address not just the constructed content, but also the construction of content, which includes learning. While names don’t really matter, for convenience, we can call this realm of learning, intelligence, and the construction of content the Experiential Realm.

      There are many inherent challenges working with experiential phenomena, such as the inability to independently verify mental content. But these challenges do not invalidate the cause and effect relationships. We already understand this intuitively. Any of us who teach students know that learning is a skill that can be improved. We give our students tips and tricks for how to alter the content in their minds to be more effective. One of the main reasons we have graduation ceremonies is to optimize a shift in their sense of self from student to graduate.

      By what means could we test these hypotheses? This is a great question, and I hope the answer expands dramatically in the near future as we find more ways to talk about them. Today, there is at least one very powerful means, which is one-pointed concentration.

      One-pointed concentration is, as the name suggests, the practice of focusing all our attention on a single object. Try it. Pick anything, or start with the image of a three dimensional number 1. Just hold the image in your mind without thinking about anything else at all. It turns out to be impossible at first. There is way too much momentum in the layers and layers of mental content to stop the flow.

      Now, if someone understood the causal mechanisms by which that momentum is generated, they should be able to insert into the process and lighten the momentum. And if someone had thorough knowledge of how content is created in the first place (experientially), they should be able to arrest all content in the mind.

      So, if you think you understand the mind but can’t seem to hold one-pointed concentration for any length of time, you’re probably mistaken. This is the accountability piece, albeit limited to our own experience. Still, for those of us interested in testing the integrity of our theories, one-pointed concentration is a great tool.

      There are ways to extrapolate from these human processes of creating content to non-human and even non-neurological determinations and learnings. Intelligence is modular, as is all the content in the mind. The parallels are significant and meaningful. The physical mechanisms that are the focus of TAME are not my area of interest. I’m just hoping to help expand the scope and efficacy of these explorations through an understanding of the principles of cause and effect in the Experiential Realm (or whatever we’d like to call it).

  17. Pamela Lyon Avatar
    Pamela Lyon

    Thank you, Michael! Very clear statement of the perspective that gave birth to this open-minded and incredibly generous gift of a website. May I remind everyone (including myself) that if we wish to declaim our views in unwavering, impenetrable, unequivocal terms we totally violate the spirit and purpose of this gift. X (formerly Twitter) exists for that. Mike doesn’t need us, although we may occasionally say something useful to him. He is trying to share, in the hope that what he shares will benefit others. I, for one, am incredibly grateful because it stimulates my mind (physical or non-physical it’s all belief at this point). We are not entitled to his thought. I can barely believe he makes time in an inconceivably over-committed life to write what he does. This is an act of almost unbelievable generosity. I, for one, will try to be worthy of his largesse.

  18. Joseph McCard Avatar
    Joseph McCard

    Thanks to Mike and Pam, and I acknowledge Tony’s contribution.

    Where is the memory? The framework you use to understand memory is based on current scientific practices and principles. If you change the framework, the answers are quite clear. So, in practical terms, please consider this:

    You are a Gestalt of aware energy: energy that is detected physically, and remains undetected nonphysically, as it is outside the EM spectrum. Your non-physical mind, or psyche, contains your memories. They are EM-type patterns, best represented as a dynamic conscious field, plastic, constantly changing through your changing beliefs and experiences. Hence, for example, you cannot objectively read those memories. Your memory is there. It is not in the brain. The brain simply transduces the energy patterns created by the physical cells, and transmits them to the mind. The mind does the same in reverse.
    I add that emotions are highly concentrated patterns of energy; your emotions trigger your memories and create memory events. For example, if you remember a visit to your great aunt’s farm during summer vacation as a child, this will trigger a kind of domino effect, whereby many different memories from that visit become apparent, as do memories of other aunts, for example.

    If you need to get your footing here, Leibniz’s Monadology is quite helpful.

    Well, that is likely much to absorb, so I leave you with the following quote about energy itself:

    Feynman said: “it is important to realize that in physics today, we have no knowledge of what energy is. We do not have a picture that energy comes in little blobs of a definite amount.”¹

  19. Joseph McCard Avatar
    Joseph McCard

    WRT Chris Fields’ mention of anthropology and indigenous tribes: there is a great book by the anthropologist Peter Skafish discussing language issues, and the issues I am raising, in his new book, “Rough Metaphysics”.

  20. Joseph McCard Avatar
    Joseph McCard

    Coming out of the effects of anesthesia, you reconnect (as Chris says, re-couple) and refocus your body consciousness to your conscious mind, like dreaming. Amnesia and blindsight, for example, are relevant here, as is the term “resonance”. I should mention that all the cells in the body communicate with the mind. We focus on the brain because that is where the greatest concentration of cells is.

    WRT Chris Fields comments about miscommunication, and relevant to memory: Information does not exist by itself. Connected with it is the consciousness of all those who understand it, perceive it, originate it. So, there are no records, and no memory, of forever-available objective banks of information into which you tune. So, for example, a memory is a dynamic plastic interactive field that is a container that receives information, organizes it, and can transmit it.

    respectfully, Joe

  21. David Lemmer Avatar
    David Lemmer

    In the vein of “where is the memory” I would like to raise a question in a neighboring space: “where is the behavior?”. I’ve been trying to think of a statement that could form the basis for basal cognition at the atomic/molecular level, as well as a relational structure that would allow it to scale. I’d enjoy hearing feedback (supportive/indifferent/critical) from Mike et al. and those on this forum around this as a statement for basal cognition:
    “The basal goal state of matter is to maintain the interactions and outcomes within the rules of physics/mathematics”?

    with a scaling relationship that follows the form of “sub-scale matter set maintains sub-scale goal and is scaled up through super-scale feedback mechanisms towards super-scale goal”,
    e.g. the gene regulatory agents maintain the rules of physics and mathematics in their interactions and outcomes, and these are scaled up when placed into a gene regulatory network configuration which maintains interactions and outcomes according to states stored within its memory.
    The reason I have been trying to think along these lines is that there seems to be a gap in our ability to describe, characterize and understand the dynamics/interactions across scales. I think having a formalism like this may aid in forming and testing hypotheses that describe how to effectively interact with agential/persuadable materials. While people talk about basal cognition and wrestle with what level of panpsychism they are willing to accept, I’ve not seen attempts to articulate formalisms that can be tested.
    I also think finding formalisms that describe basal cognition and how behavior scales from seemingly deterministic to stochastic outcomes could help research by allowing them to be encoded into machine language for simulation. Simulation plus a good algorithm capturing patterned formalisms has already been proven a good way to find paths towards interesting truths in the Levin lab.

  22. Joseph McCard Avatar
    Joseph McCard

    I noticed Mike had the bongo drums sitting on the floor of his office, and it reminded me of a picture in the beginning of Feynman’s “Lectures on Physics”.
    So it may be you would be more inclined to listen to him? I don’t know if this link below will work; if not, just do a Google search on “Richard Feynman – The World from another point of view”.

    Thanks for listening, joe


  23. Joseph McCard Avatar
    Joseph McCard

    My aim is practical. The goal is to help each individual solve his or her own personal problems. Both the noetics and the therapeutics are here in germ. Reality is produced by thought and then emotions, and it is strictly causally subsequent to them.

  24. Joseph McCard Avatar
    Joseph McCard

    Mike Levin, Consciousness and existence do not result from delicate balances so much as they are made possible by lack of balances, so richly creative that there would be no reality were balances ever maintained.

    I’m still waiting!!

  25. Joseph McCard Avatar
    Joseph McCard

    A cognitive bias, for high performers (like Levin): the Dunning–Kruger effect is often misunderstood as a claim about general overconfidence, rather than the specific overconfidence of people unskilled at a particular task. Here, Levin, like David Kolb, misunderstands, and is unskilled at, understanding life.

    still waiting…

  26. Joseph McCard Avatar
    Joseph McCard

    “Dialectics can help AI systems to generate novel and creative solutions..” (Levin Q&A)

    oopsie! Do a google search on “limits of AI”

    still waitin’.

  27. Joseph McCard Avatar
    Joseph McCard

    “Consistent with this assumption, by selectively damaging…brain regions, one could suppress or evoke cognitive or behavioral responses that confirmed suspected structure-function relationships.”

    Levin believes that he can understand life, and consciousness, by destroying them.

    still waiting.

    1. Mike Levin Avatar
      Mike Levin

      Indeed, loss-of-function approaches have been used for many decades to provide useful information about necessary parts of the mechanism (not the complete answer, obviously). Don’t like them? Great – come up with a better approach and execute on it. Here’s what’s actually useful to others. Make a clear statement as to what your framework has allowed you to do. New experiments? New predictions? Capabilities we couldn’t reach before? Improvements in your or others’ lives? Something else? That’s a way to add value for the community and provide evidence that your views are useful for progress. It’s easy to make claims about consciousness, the limitations of this or that, and critique other frameworks, but it’s not going to be useful to anyone until you can draw a clear path from “I thought X, and it enabled me to do interesting thing Y which we couldn’t before”. If you can do that, then a healthy discussion comparing worldviews can take place. Otherwise it’s just empty verbiage and will spark no joy.

      1. Tiffany Avatar

        Well said. Varying viewpoints can be healthy and transformative. Providing useful ideas and findings, even differing ones, can lead to more solutions and progress. That should be the goal ❤️
