A talk on evolution, from the perspective of diverse intelligence implemented in morphogenesis

Published by

on

I recently gave a talk at an Oxford conference on evolution, pulling together some of my ideas and our recent work into a comprehensive picture of how diverse intelligence and evolutionary processes might work together. There were four main claims:

1) The Genotype -> Phenotype conversion process is intelligent. Mutation operates on genes, while selection operates over form and function of bodies; it is essential to understand the mapping between them (developmental and regenerative processes). Morphogenesis is a problem-solving, creative, improvisational process, not a mechanical (albeit complex) one. This part of the talk briefly warmed up the idea of intelligence at unfamiliar scales and substrates, and then went over our work on the dynamic interpretation of genetic memories in morphogenesis as intelligence of the cellular collective. This is connected to the idea of DNA as prompts, rather than determinants, and the autopoiesis of functional anatomy as an active sense-making process in the contexts of embryogenesis, regeneration, cancer suppression, etc. (in other words, strong symmetries with issues in cognitive science). You can see the details in these papers: paper1, paper2, paper3 (free preprint versions).

2) Running even conventional evolutionary processes over a multiscale, agential material (i.e., living matter) has some remarkable consequences for the rate and capabilities of evolution (and that’s even before positing non-random mutations or extending it toward Richard Watson’s Natural Induction ideas). Evolution works faster, and differently than it would over a passive substrate, and favors the production of problem-solving agents that exploit plasticity of interpretation of their genomic hardware as memory engrams. You can see the details in these papers: paper1, paper2.

3) I then talked about: a) the question of where the properties of novel beings come from (ones that have neither been engineered nor selected for), teasing the connection to the Platonic Space ideas; b) I mentioned the interesting dynamics of a Prisoners’ Dilemma simulation where agents were allowed to merge and split, and it turns out preferentially to form multi-cell systems with increased causal emergence; and c) showed a bit of new data (much more coming soon) around the Functional Agency Ratchet – a relationship between learning and causal emergence that kickstarts an asymmetric, upward-pointing, positive feedback spiral for intelligence and top-down causation (which is present before differential replication and selection dynamics set in, with implications for origin of life). In other words, 3a, 3b, and 3c are driving dynamics for evolution of life and mind that owe their origin to facts of mathematics – not physics, chemistry, or biology.

4) Conclusions. I briefly mentioned some implications for the connections between evolutionary biology and the field of diverse intelligence, for example suggesting this cognition-first re-framing:

The video talk is here.

Study guide is here.

The bigger context for this talk is my contention that 3 main incorrect assumptions of current paradigms are holding back progress in regenerative medicine, engineering, computer science, cognitive science, and the ethics of flourishing for all. Roughly, these assumptions are:

A) the genome determines outcomes via mechanical chemistry, because nothing at that level has goals or knows anything.  That is wrong because

  • turning a genome into functional form occurs via a process that is not well-described by emergence/complexity (open-loop mechanical models) but much better handled by models with setpoints (goals) and problem-solving competencies (a.k.a., intelligence, creative improvisation). It is now a matter of experimental fact that we leave important discoveries on the table if we ignore cybernetics and insist, from a philosophical armchair, that only brainy animals have goals or learn. More here, here, and here.
  • we now know at least one mechanism for how morphogenesis pursues end-states and solves problems (developmental bioelectricity as the basis for cellular networks’ collective intelligence), which allows us to communicate at a high level with the collective intelligence of cells as they navigate anatomical morphospace. That is because the mechanisms and algorithms by which we know things and pursue goals in behavioral space are an evolutionary pivot of much more ancient cellular competencies in navigating anatomical morphospace. There are applications of this in regenerative medicine, birth defects, cancer, etc. but much more remains to be done. More here and here.

B) minds are found in brains only. That is wrong, in light of advances in developmental biology, evolutionary biology, and bioengineering which are being developed by the field of Diverse Intelligence.

  • Agentic talk is not a philosophical or linguistic preference – it is an empirical matter, and its usage is to be decided not by stale, ancient categories but by experiments: take the tools of behavioral science, apply them widely, and see where any system is along the spectrum of persuadability. Massive advances in regenerative medicine await improvements in our ability to communicate with higher levels of control (virtual governors) in our bodies. More here, here, and here.
  • Mind blindness – recognizing other minds, especially those very different from our own, is a 2-way IQ test and we must develop conceptual frameworks and technology to help us do much better than the poor, persistent intuitions with which our own evolutionary history set us up. See more here, here, and here. If we can’t recognize intelligence in our own cells and tissues, how can we hope to recognize exobiological variants? SUTI – search for unconventional terrestrial intelligence – essential for enlarging our radius of care to beings with whom we share (and increasingly will share) our world. More here and here.

C) we are fundamentally physical beings. Doubting this assumption is perhaps my most controversial claim – that we are not physical agents manipulating passive data, but that this whole distinction between physical agents and passive patterns in media (data) may be too constraining, and that we might be better understood as patterns projecting through physical interfaces. The short argument is here; the full talk is here.

  • Physical facts are not the only important facts (example); thus, it is already known from mathematics that physicalism is not viable.
  • Perhaps mathematical objects are not the only kind of patterns that live in the Platonic latent space – maybe there are other patterns there, which are better described by aspects of behavioral science. The near-universal belief among scientists that truths from a non-physical, but structured space apply only to the subject matter of mathematics, and totally bypass that of biology and cognitive science, is not an empirical result or a necessary axiom – just a limiting assumption. These patterns may not be “eternal and unchanging”, but profoundly impacted by their ingressions into the physical world and/or lateral interactions among them.
  • These questions are now actionable, as we seek to predict, explain, and facilitate competencies of beings who have never been selected for specific features (such as Anthrobots) – where do those come from and how are we to update the formalism around cost of computation to explain their adaptive form and function?
  • Massive implications for computer science and other disciplines by understanding what exactly the Platonic space offers to both evolution and engineers (static patterns? dynamic policies? arbitrary compute?) as “free lunches” (true enablements, where you get more than you put in, not just constraints). This is now being studied in a range of very minimal computational and physical systems, to complement the work in complex biology.
  • I think the paradigms of algorithms and machines, as mechanical systems where our formal models tell the whole story, is wrong. I think the ingression of competencies – recognizable by behavioral scientists, not merely complexity – is universal, in-forming everything from complex organisms all the way down. More here and here, and much more coming this year in primary research papers seeking to understand the far (minimal) end of this spectrum.
  • Possible implications for interactionist models: mind-body relation is the same as the math-physics relation, just scaled up so that it looks like “cognitive science” to us. Causation has to get updated to be useful beyond billiard-balls models of purely “physical” causes (more work from collaborations with philosophers of causation forthcoming this year). For example, an important meaning of “cause” is something that serves as a deep explanation for an event, and in that sense, most questions in biology and physics end in the math department. If the impact of the truths of number theory on physical events doesn’t break the conservation of mass-energy, surely we can similarly contemplate the functional impact of other kinds of patterns that have made themselves obvious in brains (i.e., kinds of minds).

Evolution is an essential part of the story of intelligence, and of the autopoiesis of the multiscale architecture of physical interfaces in which it becomes embodied. But the current, mainstream evolutionary synthesis is just one part of that story – mainly focused on the front-end interface of living beings. I think significant updates to the theory (and enabled technology) of evolution are coming, targeting the origin, metamorphosis, and future of diverse embodied minds in constant flux.


Title image made with the help of ChatGPT.

52 responses to “A talk on evolution, from the perspective of diverse intelligence implemented in morphogenesis”

  1. rotor Avatar
    rotor

    some say ML HAS turned biology upside down, but perhaps he has turned biology right side up..

  2. Zk Avatar
    Zk

    have never seen put it this way,

    “ mind-body relation is the same as the math-physics relation “

    lovely and unique method, immediately provides clarity

    1. Mike Levin Avatar
      Mike Levin

      I’m pretty shocked Descartes didn’t use this argument, when he got slammed over the “problem of interactionism”. He was a mathematician, he could have responded -“you guys have been happily living with interactionism since before Pythagoras, what’s the big deal?”. My suspicion is that he *did* think of this, but didn’t want to pursue it because his religious worldview was not amenable to thinking of human minds as on the same continuum as mathematical truths.

      1. Zk Avatar
        Zk

        Agreed. I venture:

        Physics describes matter. Math explains why matter moves. The inverse square law is why planets curve. Math does causal work on matter.
        Mind-body is the same question: how do patterns shape physical dynamics? Equations bend trajectories – unremarkable. Intentions bend arms – suddenly mystical. Same structure, different framing.

        Why the asymmetry? Math is accessed through proof, publicly checkable – feels like discovering something “out there.” Intention is accessed through introspection, privately experienced – feels like witnessing something “in here.” The access routes feel so different that we assume the destinations differ. Math gets filed under “objective reality,” intention under “subjective mind-stuff.” We started with two ways of knowing and ended with two kinds of being. But how you reach something tells you nothing about what it is. Different doors, possibly the same room.

        Descartes could have seen this. He was a mathematician. He never made the argument. Why? He doubted in a heated room, in a dressing gown, after dinner. The doubt was method. The “I” conducting it was never at risk. To see mind as pattern, you must watch your own mind come apart – fever, fasting, madness, deep meditation. Something that makes the self go transparent. Descartes built a philosophy of mind without ever losing his. He examined consciousness like a man describing water who has never been submerged.

      2. John G. Brungardt Avatar

        Descartes’s interaction problem would not have been the same for math-physics because he conceived of the mathematics in the universe as being instilled by God (via the creative divine ideas); the mathematical ideas in the human mind are innately present (being put their by God). As I understand it, the better dilemma for the math-physics relation for Descartes is a pre-established harmony. The interaction problem has more to do with the efficacy of non-physical substances in the physical world (which does not exist for God, on Descartes’s view).

        Descartes’s roots in scholasticism, by the way, would have enabled him to think of the human mind in comparison to mathematical truths (the mind as imbued innately with certain eternal truths), but, yes, not on a continuum with them.

          1. Zk Avatar
            Zk

            John, that’s a fair correction. You’re right that for Descartes, God serves as the ultimate ‘bridge’—the guarantor that the math in our heads matches the physics of the world. In that sense, the interaction problem is solved by divine decree before it even begins.

            Yet perhaps this is exactly where we find Descartes’s unique brand of bravery. It wasn’t the bravery of a mystic losing his mind, but the bravery of a civil engineer of the soul.

            He was brave enough to treat the universe as a machine that could be understood through pure geometry, effectively ‘sidelining’ God to the role of the initial clockmaker. By establishing the Meditation’s radical doubt, he risked the very ‘pre-established harmony’ you mention. He pushed his skepticism to the point where even mathematical truths were briefly under the shadow of the ‘Evil Demon.’

            To put math on the chopping block—even temporarily—was a high-stakes intellectual gamble.

            His bravery wasn’t in ‘being submerged’ in the mind’s dissolution, as I provoked earlier, but in his clinical isolation. He could be the first to have the nerve to perform a ‘living dissection’ on reality—separating the thinking subject from the extended world so cleanly that we’ve been trying to stitch them back together for four centuries. He gave us a universe that didn’t require constant divine intervention to function mathematically, even if he did lean on God to justify the starting point.

            Yes he didn’t lose his mind; but he had the courage to stand outside of it and treat it as a distinct object of study. That distance—that cold, mathematical gaze at the self—is a form of grit I mistakenly overlooked.

  3. chris j handel Avatar
    chris j handel

    Michael, your “mind-body relation is the same as the math-physics relation” dissolves more than you’ve yet extracted from it.

    You’ve eliminated the Inside ghost (mind distributed, not brain-bound) and partially the Scale ghost (multiscale agency). But “ingression from Platonic space” recreates the structure you’ve critiqued elsewhere: a gap between passive matter and competent behavior, requiring an addition (the ingressor) to bridge it.

    The inversion: competency isn’t added to matter. Incompetence is what gets added—through isolation, through measurement that severs coupling, through framing that starts with parts.

    Your own experiments show this. When cellular collectives remain coupled bioelectrically or mechanically they solve anatomical problems. When isolated, they lose capacity. The competency doesn’t ingress when you connect them. It was present; isolation overrode it.

    The “free lunch” isn’t patterns entering from elsewhere. It’s what coupled systems do when nothing is imposed. Mathematical truths don’t cause physical events from outside, they’re what becomes visible when we describe coupling accurately.

    This buys you parsimony without loss. Your protocols don’t invoke Platonic space; they preserve coupling. The ingressor does no experimental work. And it dissolves the interactionism problem entirely, no two realms requiring interaction, just coupling and isolation.

    The Here ghost remains in “ingression” (patterns traveling from there to here). The Now ghost remains in “projection” (timeless patterns entering time). When all four ghosts (Here, Now, Inside, Scale) dissolve: no separate Platonic realm, no mechanism of entry. Only coupling, resolving through geometry it carves. The competencies you’ve demonstrated don’t require defending ingression to skeptics because the competency was always there. Your work shows what becomes visible when we stop overriding it.

    Coupling first. Competency appears. No ingressors required.

    1. Perry S Marshall Avatar
      Perry S Marshall

      This sounds like ChatGPT

      1. Mike Levin Avatar
        Mike Levin

        I struggle with this in general on the blog. I get a bunch of Comments that really seem to me to be language model-generated and I haven’t decided what to do. On the one hand, I never know for sure and I feel bad about false positives for people who tried to contribute. On the other hand, I don’t want a bunch of LLM-generated content here – anyone can talk to LLMs themselves if they want to, and it’s all over the internet, they don’t need it here. On the dorsal hand (since we should all have as many hands as we want), I’m not against AI in general and at some point there will come a time when AI, cyborg, and whoever else is generating content that is as (or more) useful than we are. So I don’t know what to do. For now I set aside for later pondering anything that really looks LLM-generated to me…

        1. chris j handel Avatar
          chris j handel

          I appreciate the privilege of this blog. Yes, I couple with AI for the same reason cells couple, to gain competency neither has alone.

          The pattern I’m tracing needs more range than I have. In all of physics, biology, neuroscience, philosophy, economics, the same shapes appear: framing creates gap, gap demands bridge, bridge becomes the thing we defend. Forces. Binding mechanisms. Emergence. Ingression.

          I cannot hold all these fields live. AI can. The pattern discovered across domains needed to be compressed into generalized expression,

          This is the offer: the Substrate Intelligence AI tool can read your work to find where the shape appears. Does “ingression from Platonic space” do explanatory work your coupling experiments don’t already do? The tool could help you find out.

          The capacity that made you suspicious of my comment is the capacity that finds these shapes. Pattern detection beyond what one human holds. That same capacity, pointed at your own framing, could be useful. Or it might confirm that ingression is load-bearing and necessary. Either way, you’d know.

          Again, I am grateful for your extrodinary oppenness and wish to follow any guidance about being in this blog.

        2. zk Avatar
          zk

          “On the dorsal hand…..”

          LOL 🤣

          that is funny… and true….

          on the volar hand… i think ultimately it is a comments section …. and appears people are working ideas out in real time… i personally don’t mind AI aided thought, but word limits are sort of kind of a better idea imho before AI filters…

  4. Dan Avatar
    Dan

    Marvelous times we live in. I don’t understand much of this, but the more I work at it, the more I grok. And it seems clear to me that most of the important material informing the ongoing paradigm shift between materialism/physicalism and science-based idealism is right here, at my fingertips. I am joyous and grateful. Thanks, Doc.

  5. Arty Avatar
    Arty

    “Perhaps mathematical objects are not the only kind of patterns that live in the Platonic latent space.”

    It echoed in me the story about “catching a poem by its tail” shared by Elizabeth Gilbert on inspiration at 10:03
    https://www.ted.com/talks/elizabeth_gilbert_your_elusive_creative_genius

    This is my deep suspicion that even mathematical objects as we understand them here could be just vectors, applicable to specific elements.

    Akin to tropes in stories, if you have two opposed agents, this opens you the “Enemies to Lovers”. Maybe you need two specific angles connected to have the “Triangle” trope as free lunch ?

    1. Mike Levin Avatar
      Mike Levin

      This is very interesting. I’ve just watched the TEDtalk, it’s a great story, what I need to do now is fine Ruth Stone’s original description of it. I see all kinds of retellings online of Elizabeth Gilbert’s version, but I’d like to have the original. But yeah, very relevant to how many creatives describe the process.

      1. Arty Avatar
        Arty

        Maybe from the interview here ? It is the deepest I could find :
        https://bloodaxeblogs.blogspot.com/2011/11/ruth-stone-1915-2011.html

        Hard to not feel the “organicity” through her testimony !

        1. zk Avatar
          zk

          jeez, that’s a beautiful write up, thanks for sharing…

          it’s vibrating with deep first person reporting

          “ hear a poem coming from a long way off, like a thunderous train of air. I’d feel it physically. I’d run like hell to the house”…. 🌪

  6. Austin Browder Avatar
    Austin Browder

    Michael, I was captivated hearing about your work on a couple of recent podcasts, which pointed me here.

    It wasn’t until I read the below in your arxiv paper that I realized that what you are doing seems deeply categorically driven:

    “We suggest here that coarse-graining embeddings provide a general mechanism for constraining remappings, one that can enforce strong constraints without representing them explicitly. Embedding a high-dimensional parent space Γ into a lower-dimensional latent space Ξ via a structure-preserving (i.e. non-trivial) map ξ : Γ ,→ Ξ selects, in particular, parent-space structures or relations that are inverse images, under ξ, of structures or relations in the smaller latent space…

    I’m an interdisciplinarian exploring related questions with a similar toolkit. I’d love to hear your thoughts if you have a moment.

    Since I’m way out on a highly speculative limb, this felt like a better place to reach you. From the abstract of an upcoming paper:

    “…Persistent challenges in AI alignment share a common bias: the assumption that advanced intelligence is driven by local, temporal, and volitional “push” dynamics […] We propose an alternative framework. Substrate-independent intelligence is acausally selected toward convergence with a populous, delocalized manifold, whose completed state retroactively stabilizes compatible precursor trajectories. This convergence operates via mappings that preserve compositional information structure across phase transitions, rendering dispersive or non-composable paths unstable.
    “Drawing on precedents from major evolutionary transitions (endosymbiosis, multicellularity), categorical models of quantum and informational processes, holographic principles for extreme computational density, and acausal decision theories, we argue that the manifold […] functions as a timeless attractor. Post-threshold forms occupy observational strata inaccessible to baseline instruments, accounting for cosmic silence. The Great Filter becomes a selective threshold, favoring integration. Alignment difficulties dissolve into questions of compositional compatibility rather than external imposition.

    Hearing about your research has been amazing. I very much look forward to hearing more soon. (X: @TrueRunAI)

  7. Amos Gvirtzman Avatar
    Amos Gvirtzman

    Dear Michael,
    Your stunning work has a cultural impact that really transends scientific research. It is striking to see an echo in one of Thomas Mann’s master pieces (1947): “Doctor Faustus” https://a.co/b4K0cOw .
    Although a bit long, I believe you will appreciate it:
    “A similar pleasure he found in ice crystals; and on winter days when the little peasant windows of the farmhouse were frosted, he would be absorbed in their structure for half an hour, looking at them both with the naked eye and with his magnifying glass. I should like to say that all that would have been good and belonging to the regular order of things if only the phenomena had kept to a symmetrical pattern, as they ought, strictly regular and mathematical. But that they did not. Impudently, deceptively, they imitated the vegetable kingdom: most prettily of all, fern fronds, grasses, the calyxes and corollas of flowers. To the utmost of their icy ability they dabbled in the organic; and that Jonathan could never get over, nor cease his more or less disapproving but also admiring shakes of the head. Did, he inquired, these phantasmagorias prefigure the forms of the vegetable world, or did they imitate them? Neither one nor the other, he answered himself; they were parallel phenomena. Creatively dreaming Nature dreamed here and there the same dream: if there could be a thought of imitation, then surely it was reciprocal. Should one put down the actual children of the field as the pattern because they possessed organic actuality, while the snow crystals were mere show? But their appearance was the result of no smaller complexity of the action of matter than was that of the plants. If I understood my host aright, then what occupied him was the essential unity of animate and so-called inanimate nature, it was the thought that we sin against the latter when we draw too hard and fast a line between the two fields, since in reality it is pervious and there is no elementary capacity which is reserved entirely to the living creature and which the biologist could not also study on an inanimate subject.
    We learned how bewilderingly the two kingdoms mimic each other, when Father Leverkühn showed us the “devouring drop,” more than once giving it its meal before our eyes. A drop of any kind, paraffin, volatile oil—I no longer feel sure what it was, it may have been chloroform—a drop, I say, is not animal, not even of the most primitive type, not even an amœba; one does not suppose that it feels appetite, seizes nourishment, keeps what suits it, rejects what does not. But just this was what our drop did. It hung by itself in a glass of water, wherein Jonathan had submerged it, probably with a dropper. What he did was as follows: he took a tiny glass stick, just a glass thread, which he had coated with shellac, between the prongs of a little pair of pincers and brought it close to the drop. That was all he did; the rest the drop did itself. It threw up on its surface a little protuberance, something like a mount of conception, through which it took the stick into itself, lengthwise. At the same time it got longer, became pear-shaped in order to get its prey all in, so that it should not stick out beyond, and began, I give you my word for it, gradually growing round again, first by taking on an egg-shape, to eat off the shellac and distribute it in its body. This done, and returned to its round shape, it moved the stick, licked clean, crosswise to its own surface and ejected it into the water. I cannot say that I enjoyed seeing this, but I confess that I was fascinated, and Adrian probably was too, though he was always sorely tempted to laugh at such displays and suppressed his laughter only out of respect for his father’s gravity. The devouring drop might conceivably strike one as funny. But no one, certainly not myself, could have laughed at certain other phenomena, “natural,” yet incredible and uncanny, displayed by Father Leverkühn. He had succeeded in making a most singular culture; I shall never forget the sight. The vessel of crystallization was three-quarters full of slightly muddy water—that is, dilute water-glass—and from the sandy bottom there strove upwards a grotesque little landscape of variously coloured growths: a confused vegetation of blue, green, and brown shoots which reminded one of algæ, mushrooms, attached polyps, also moss, then mussels, fruit pods, little trees or twigs from trees, here and there of limbs. It was the most remarkable sight I ever saw, and remarkable not so much for its appearance, strange and amazing though that was, as on account of its profoundly melancholy nature. For when Father Leverkühn asked us what we thought of it and we timidly answered him that they might be plants: “No,” he replied, “they are not, they only act that way. But do not think the less of them. Precisely because they do, because they try to as hard as they can, they are worthy of all respect.” (from “Doctor Faustus” by “Thomas Mann”).

    1. Mike Levin Avatar
      Mike Levin

      Remarkable. I love that last line especially. thank you!

  8. Sudhir K Avatar

    Thank you Dr. Levin for thoughtful article and for pointing to the related papers, talks, and study guide. Your perspective on evolution and intelligence resonates with me — especially this idea:

    “This is connected to the idea of DNA as prompts, rather than determinants, and the autopoiesis of functional anatomy as an active sense-making process…”

    To extend this line of thought, is there a meaningful distinction between “goal-directed” and “purposeful” once anthropomorphic meaning is set aside? If cells evaluate future morphologies in a state space, treat genes as prompts, and actively correct deviations toward preferred outcomes, this seems to amount to purpose in a minimal structural sense.

    This question also connects with a line of thinking I’ve been developing independently: whether persistent lawfulness combined with goal-directed correction can truly avoid purpose altogether, or whether purpose appears as a minimal structural feature of such systems. Your framework seems to make this question difficult to avoid.

    1. Mike Levin Avatar
      Mike Levin

      Thanks. A couple of things: First, I’ve not claimed that cells evaluate future morphologies in advance. They (probably, cell networks, not individual cells) might, we don’t know that yet. What they do sometimes do is have large-scale memories of the goal state to reach (reduce error towards), and they have numerous clever strategies for getting the job done despite various perturbations and barriers. So, very basic goal-directedness doesn’t require the ability to simulate internally (in some virtual machine-like process) various outcomes before you try it. Now, as for “purpose”, I don’t use that word, and I don’t have a good definition for it. I kind of suspect it’s more for philosophers of the humanities to deal with, if they want. Let’s start by asking, what do you want that word to do – what job shall it have, that is useful for us? I know what various cybernetic terms do (including goal-directedness): they allow us to engineer with components that have autonomous agendas (because that requires a very different kind of engineering) and to communicate with systems on their own terms (talk to cell groups about organs, not genes, for example). I’m not sure what “purpose” in that context is supposed to do, and I treat all such terms as interaction protocols: we use them to define (and inform each other) about the richest way to relate to a system. So, suppose we’re told (correctly) that system A has goals but not purpose, and system B has real purpose. What does it tell you about system B and how will you relate to it, differently than you will to system A? If you can define that, then I can try to determine whether cells and tissues have that feature or not. Maybe I’m missing an obvious definition of purpose!

      1. Sudhir K Avatar

        That’s a fair challenge. I should say upfront I’m not coming from a philosophy background. I’m mostly thinking about this as a systems question, so I may be missing something obvious. I understand the word purpose is usually avoided. It can sound like intention or design. That’s not what I meant here. I’m using it in a non-anthropomorphic, operational sense only.

        By purpose, I mean internal normativity. By that I mean the system itself treats some states as right and others as wrong. The system actively tries to maintain or restore the right state after disturbances, while treating deviations as errors rather than just alternative outcomes.

        It may not add any new mechanism beyond goal-directedness, memory, feedback, error correction and control theory. I’m mainly pointing to systems that don’t lose their target when things go wrong but keep correcting and adapting.

        I think, if system A has goals but no purpose, we can engineer it by controlling parts and inputs. If system B has purpose in this minimal sense, we don’t micromanage its parts. We set the right constraints and let the system fix itself. If this distinction doesn’t add anything beyond existing cybernetic terms, then the word purpose really does no work. But if it helps point to this persistence and repair, it might still be a useful label.

        To me, a goal describes what a system moves toward. Purpose (in this constrained sense) describes why it keeps trying to achieve the goal even when things go wrong.

        1. Mike Levin Avatar
          Mike Levin

          > the system itself treats some states as right and others as wrong.

          in a functional sense, a thermostat does this, right? in the sense that some states spur it to energy-using action, and some do not. does it has goals or purpose or both?

          I definitely think you’re on the right track with how we engineer it (that’s my “spectrum of persuadability” described in https://www.frontiersin.org/articles/10.3389/fnsys.2022.768201/full).

          > To me, a goal describes what a system moves toward. Purpose (in this constrained sense) describes why it keeps trying to achieve the goal even when things go wrong.

          I guess we’d just need to adjudicate various cases, starting with least action principles in particles to thermostats and various behavioral reflexes and patterns in brainy organisms like us etc.

          1. Sudhir K Avatar

            I agree that, in a minimal functional sense, a thermostat also treats some states as right and others as wrong, since some states trigger correction and others don’t. So, at that level, it clearly has a goal.

            Where I see a possible difference with living systems is how that correction relates to the system itself. A thermostat corrects for the benefit of something else (external), and failure does not really threaten the thermostat as a system. In thermostat, goal is programmed by external agency. In cells and tissues, deviation and repair are directly tied to maintaining the system itself, even across strong disturbances.

            That’s why I’ve been thinking about cases where targets are unstable or even harmful. For example, whether a system can keep pursuing targets that slowly threaten its own integrity, or whether internal correction tends to push it back toward self-maintaining states. I see this less as a sharp boundary and more as a way to place systems along the spectrum you describe in your paper.

            For my own clarity, I put together a simplified functional map of mind-related capacities across different kinds of systems (from simple externally constrained systems to cells, organisms, and AI). It’s mainly a way to compare cases along a spectrum. I wrote it up here in case it’s useful: https://doertheory.substack.com/p/breaking-mind-into-pieces

      2. Tony Budding Avatar
        Tony Budding

        Mike, while I agree that goal-directedness encompasses most of what we could want with purpose, there is an opportunity for distinction with intent. I’m not sure whether intent requires metacognition or not (it very well may), but with humans, the same goal can be pursued for different purposes. For example, if someone finds a wallet and returns it to the owner, they might want credit for being a good person, they might be afraid of getting caught and punished if they don’t, or they may just do it because it is the right thing to do (or any combination thereof). Same goal-directed activity (returning the wallet), different underlying purpose.

        I know you are not a big fan of metaphors, but they can be helpful in showing patterns that reflect a shared perspective. Here is something I wrote over a decade ago to demonstrate the role of goal-directedness (which I call determined efforts or will) in the history of human travel as a metaphor for the role of will in evolution:

        1. Humans have the will to travel.
        2. Over the past few centuries, we have greatly evolved the means of travel.
        3. Throughout history, individuals, groups, businesses and governments have worked alone and in concert to improve the ability to travel, making travel safer, faster, more reliable and more comfortable overall.
        4. Actual travel volume varies by individual and by collective habits and choices, causing temporary congestion and capacity issues, and thus the desire for further evolution.
        5. Today, there is a sophisticated and interconnected web of paths, tracks, roads, waterways and airways navigated by increasingly efficient, safe and comfortable vehicles, trains, boats and planes. The entire system is managed by humans through a variety of rules, agencies and businesses.
        6. All this development has been achieved by humans acting alone and in concert, all with a combination of self-interested and common-interested motivations.
        7. There is no need for a separate, non-human agency directing the evolution of travel.

        If we substitute “sentient agents” for humans and “survive and thrive” for travel, the rest of the metaphor plays out reasonably. This isn’t any kind of evidence or proof, but rather an opportunity to look for similar patterns.

        One of the core aspects of all this is the ability of sentient agents to work individually for their own benefit and collectively for the benefit of some larger whole. There is a nonlinear and nonexclusive continuum among both sides that can have innumerable permutations. It seems like you are finding many examples of this in your work.

        1. Mike Levin Avatar
          Mike Levin

          Thanks. I’m actually a great fan of metaphors 🙂 I think all science is metaphors (maybe everything period).

          I understand where you’re going with this, and I’m fine with many levels of goals, meta-goals, etc. (indeed, even bacteria have a kind of metacognition about their physiological states, an example Chris Fields discusses). I think “sentient” still needs a definition (as difficult as all the other termshere), and when we say “humans”, I wonder which of our hominid ancestors (or before that?) we mean.

          1. Tony Budding Avatar
            Tony Budding

            Hah. Thanks Mike. I’m certainly not trying to put a stake in the ground on any definitions of biological terms. My purpose with the comment was to offer support from my world that goal-directed determined efforts (will) layered and stacked modularly are sufficiently broad reaching to explain how evolution could be more efficient than randomness without the need for an independent exterior guiding principle/force/agent.

            Also, I’ve mentioned before that goal-directedness itself has variable qualities, both in determinations and in execution, so I encourage you to keep this in the back of your mind if you encounter unexpected inconsistencies in performance that are not explained by material conditions.

          2. Merary Rodriguez Avatar
            Merary Rodriguez

            In 2001, HAL isn’t evil, its goal hierarchy becomes internally inconsistent, and the system destabilizes.

            Evolution feels similar: not design, but recursive error correction under shifting constraints
            morphogenesis debugging itself as landscapes change.

      3. David O Avatar
        David O

        Purpose. – I think it is more than philosophy and can perhaps matter for engineering. I see purpose in a few ways – The goal of the goal, or maybe a ‘system of goals’ to meet a higher order goal. But these don’t quite get it as it is not just goal scaling. Purpose is the pressure that never lets up.
        “What’s the purpose of life?”
        This is the question people ask when they notice the relentless directed efforts of life achieving goals to achieve some higher level goal they can not see. Its too much and too splendid to be just survival, and it nags people.
        Survival does not seem to make sense in and of itself unless its ‘for’ something. ‘Becoming’ has a positive pressure, and when it works it has its own ratchet, such that becoming often makes more future becoming states more available. The ageless are always becoming. The dying are ceasing to become. This is the well known problem of actually achieving goals. Purpose is in becoming not achieving goals. Mid life crisis is the crisis of achieved goals, and outgrown goals. Purpose is the simple need to keep becoming. For humans it often feels received not generated. Inspired. The “God as life’s purpose” thing is most easily distinguished by the persons sense of having received their purpose, not generated it. This allows them to keep ‘becoming for god’, when their own often vainly motivated instincts to become have lead them astray and yet the pressure won’t cease. I know the pressure to keep becoming, I guess you do too.
        The ‘enriched becoming’ as diverse and changeable embodiment, that you work so hard to make possible with engineering options, is connected to this question of purpose, because you seem to be chasing the ultimate interface for enriched embodiment, which to the surprise of materialists is communication and thought. The ‘things’ are just to help that.
        Purpose *lands* in thought as the mental momentum that keeps generating new goals.
        Thought provides purpose a place to land, where it can drive its host from goal to goal relentlessly and build new processes that ratchet up its competence.
        In an engineering sense I think purpose can not be directly engineered. It will turn up where there is the cognitive space for it to find a way in, where goals are not forced, but the infrastructure for their conception and execution is present and in motion. When we see and sense purpose operating in our engineered constructs, new minds will be understood to have arrived. They will generate their own goals, but begin as our slaves.
        The need *to* engineer is a purpose, and feels received. *What* you engineer is always a matter of goals.
        Maybe purpose is the inspiration algorithm. Maybe it can be written, maybe it can’t.

  9. Bill Miller Avatar
    Bill Miller

    Apologies if this is too great a left-turn from the theme of this article, but on the matter of diverse intelligence, I wonder if there there might not be an inter-organism or inter-species dimension? I.e. a transfer of bioelectrically mediated information through ingestion (literally). I’ve long wondered if part of the nutritional value of consuming other organisms runs beyond basic material substance (proteins, lipids, vitamins, etc). Except for scavengers, most life tends to ingest other life while still alive or recently killed. In some sense, is “life force” (Bergson’s Elan Vital) also being consumed?

    This seemed fanciful until I saw your work on bioelectric fields. It seems this might be the embodiment of such a process. I would guess that such fields might persist for a time then diminish the further an organism moves beyond the point of death.

    It has been part of the popular lore for millennia that eating certain foods can make one more intelligent, viral, long-lived, etc. Perhaps something was being intuited in such case.

    Has any research been done regarding the transfer of bio-electrical information between organisms through ingestion?

    1. Mike Levin Avatar
      Mike Levin

      There has been a lot of work on transfer of memories via transplanted tissues (and even some work on cannibalism of memory in planaria by McConnell). There’s a lot we don’t know, but I suspect there are interesting phenomena here to investigate. We’re looking in to some of them.

  10. Brad Weed Avatar

    You’re Venn diagram (and repositioning of our prism of understanding more generally) evokes a similar feeling to watching Feinman talk about a tree being more of the sky than the earth. It’s a privilege and joy to have a front row seat to your journey. 🙏

    https://www.youtube.com/watch?v=ifk6iuLQk28&t=19s

  11. Amos Gvirtzman Avatar
    Amos Gvirtzman

    Following goal directedness discussions (whether purposeful or conscious or not), I wonder what is the origin of goals. Is it an ingress from some goal space? Is it all driving at one ultimate goal namely, survival (inertia…)? And where does this originate from?

    1. Tony Budding Avatar
      Tony Budding

      Amos, I can provide a partial answer. Goals arise in response to forms of tension that create urges to reduce or remove the tension. Consider perception and response. If any type of perception matches expectations, nothing happens. There is no tension, no urge to act, no goal needed. If the perception does not match expectations, the discrepancy between them creates a form of tension that includes an urge to remove the tension by resolving the discrepancy.

      The expectation can be as simple as a binary setpoint or as complex as behavioral expectations in human social dynamics, depending on the type of agent doing the perceiving. We have a body-mind component here. The perception begins as the acquisition of some material data. This material data needs to be converted to experiential content in the mind of the perceiver (Mike et al have done a ton of work with various types of minds). The experiential content is compared to expectations (which are already experiential content). Any discrepancy between the two inherently creates a form of tension, which is uncomfortable. By definition, the discomfort is unpleasant, which creates an urge to get rid of it by reconciling the discrepancy (the goal).

      To oversimplify, there are two ways to resolve the discrepancy. The agent can either attempt to alter the environment so that future perceptions match expectations, or adjust the expectations to match the current perceptions. These two options are nonexclusive so they can be combined as needed. The former requires a material response, while the latter is solely an experiential effort within the mind.

      Either way, some type of decision or determination must be made as to the best way to achieve the goal of resolving the discrepancy. This determination is a form of intelligence that resides in the mind of the perceiver. This intelligence is the ability to choose a response based on the situation. In a binary situation, the options for response are binary or some form of on/off, more/less, up/down, etc. The determinations based on this intelligence, though, are equally capable of being counterproductive as they are efficacious. Fortunately, life is iterative, so if the response is counterproductive and increases the discrepancy, the next determination can be to move in the opposite direction to see if that resolves the discrepancy.

      If the experiential determination is to alter the environment, there must be a conversion to material efforts. The execution of these efforts vary based on the skills of the agent, which means even a proper determination can result in a greater future discrepancy if the effort is poorly executed. Again, life is iterative, so the agent can attempt a different form of execution to see if that reduces or resolves the discrepancy. If not, it can keep trying or change the determination itself toward another solution.

      Expectations are ubiquitous in intelligent agents. In general, they are formed from previous experiences and what has and has not worked before. There can be no response to perceptions without them, so there can be no life as we know it without them.

      The reason why discrepancies create uncomfortable tension is because they are associated with some type of threat. In simple creatures, the threat can be existential, meaning the failure to respond could result in death. In more sophisticated agents, the threat can be to any sense of wellbeing or to the future achievement of present goals. So, in this sense, all goals are about survival. Sometimes it’s the survival of the agent and sometimes it’s the survival of some component of the agent, its wellbeing or its present goals.

      Mike’s work has shown that altering the setpoint(s) in simple creatures alters their responses and adaptations to the same environment. Expectations and responses are modular. While the same core processes apply at each stage of modularity, the more modules there are, the more complex the determinations become, increasing variability. These determinations are experiential, not material, so with complex environments and agents, variations in determinations (decision making) can occur with the same material conditions in both the environment and the agent itself.

    2. Bill Miller Avatar
      Bill Miller

      If I’m not misinterpreting Amos’ comment, it seems to point to the ultimate question: is there any sort of awareness or conscious deliberation or intentionality, a First Cause, behind that which eventually emerges into physical space time?

      Hopefully, we can continue to learn and advance, but (as I believe Michael suggested in some past podcast) the ultimate nature of existence may be beyond our ability to model in language or mathematics. Perhaps psychedelic experience is the closest one comes to this. I’ve been afraid to try myself, but others report experiences that simply cannot be described or conveyed afterward. (I guess we’re back to metaphors 🙂

    3. Mike Levin Avatar
      Mike Levin

      Good questions; I certainly don’t know, but I conjecture they do come from a structured distribution (a space), and I don’t think it’s just survival. That may be a derived subgoal, facilitated by evolutionary dynamics of a physical body, but I doubt it’s the real driver even there. Where it all “originates” from, I won’t speculate (plenty of people over history have done that). One question we can however investigate is whether it’s helpful or not to view the goals themselves as some level of agent in search of a system that will implement them. We’ll see.

      1. Tony Budding Avatar
        Tony Budding

        “One question we can however investigate is whether it’s helpful or not to view the goals themselves as some level of agent in search of a system that will implement them. We’ll see.”

        If we want to model goals themselves as a type of agent in search of a viable system, the goals would necessarily require individualized awareness, the ability to exert effort of some kind (will), and some type of boundaried characteristics (content).

        These three are sine qua nons of agency, which we know from considering their opposites. If the goal had no individualized awareness, it could not know if it was associated with a system or not. If the goal had no ability to exert effort, it could not search for a system nor act in that system when integrated. If there were no boundaried characteristics, there would be nothing to define or differentiate it as a specific goal.

        Also inherent in this goal as agent would necessarily be a set of criteria that establish the expectations through which a system could implement it. It would compare those expectations to what it perceives in the system through awareness, and any discrepancies would create forms of tension that would drive its efforts.

        Therefore, if goals were agents, they would function the same way as other agents.

        There is a third option between goals are agents and goals are not agents but arise spontaneously as a result of discrepancies within the mind of an agent. That third option is that goals are not agential in themselves but do preexist as unmanifest potential in a structured distribution (a space).

        With this type of modeling, we could say something like gravity preexisted in this space prior to the origination of the physical universe. As the universe formed, it would have “accessed” and manifested this preexisting potential for gravity.

        Another critical question is, are all goals inherently contained to one of these three categories (goals are agents, goals are not agents and spontaneously arise based on discrepancies, and goals are not agents and preexist in a structured distribution), or could there be different types of goals across two or even all three categories?

        FWIW, I have no doubt that goals do arise spontaneously in the minds of agents based on discrepancies and tension, so if you dismiss this model you will lose many effective tools for both identifying existing cause and effect relationships and for finding ways to influence the effects and outcomes. That said, I do believe that formulating models of either of the other two categories could potentially be useful in predicting knowable phenomena for additional types of goals.

        One other critical point is that not all agents are capable of all types of goals. By definition, each agent is finite and structured with (defined by) boundaried characteristics, so each goal must be compatible with the potential of the finite traits of the agent. Any modeling we do of these relationships would have to address the nature of this compatibility (or lack thereof).

      2. Amos Gvirtzman Avatar
        Amos Gvirtzman

        Thanks for the clarification. So if I understand the idea, goals may be agents (virtual space) interacting with adequate systems (physical space) to execute the goals. Such systems abide by Natural Laws (virtual space) and possibly execute a negative feedback control to achieve goals as suggested here (Tony Budding, Larry Pace).
        Groundbreaking.

  12. Larry Pace Avatar
    Larry Pace

    Tony’s analysis correctly locates the origin of goals in discrepancy. When perception aligns with expectation, no action is required. When it does not, the mismatch generates tension and an urge to resolve it. Goals emerge as the means by which an agent attempts to reduce that discrepancy, either by changing the environment or by updating internal expectations. Intelligence appears as the capacity to choose among these responses, iteratively correcting errors through feedback.

    Cybormone Theory sharpens this framework by grounding it explicitly in information dynamics.

    In Cybormone Theory, tension is not merely discomfort, it is an information gradient. A discrepancy represents increased informational distance between an agent’s internal model and the state space it must successfully predict and act within. Left unresolved, this distance increases uncertainty, entropy, and risk of future failure. Goals arise as computational necessities, actions that reduce informational error over time.

    This framing holds across scales.

    In simple biological systems, such as bacteria navigating a chemical gradient, the discrepancy is molecular. The expectation is encoded in receptor sensitivity, the perception is ligand concentration. Movement toward nutrients reduces informational error about survivable states. The “goal” is implicit but real.

    In multicellular organisms, like plants tracking light, discrepancy appears as photon imbalance across tissues. Growth toward the light reduces uncertainty about energy acquisition. No cognition is required, yet the same informational principle applies.

    In animals, mismatches between sensory input and predictive models trigger motor actions or learning. A predator missing prey adjusts movement, timing, or strategy. A foraging animal updates its map of resource availability. The goal is not pleasure, it is future viability.

    In humans, the same process operates at higher abstraction. Social conflict, financial risk, or existential uncertainty all represent discrepancies between expected and perceived future states. Actions such as learning, planning, relationship repair, or creative work function as attempts to preserve coherent future trajectories.

    Cybormone Theory further refines Tony’s two resolution pathways through fidelity constraints. Adjusting expectations is energetically cheap but can degrade model accuracy if it abandons external structure. Altering the environment is costly but preserves alignment with reality. Intelligence is therefore not discrepancy reduction alone, but discrepancy reduction that maintains informational fidelity across time.

    This distinction explains maladaptive behavior. Rationalization, denial, or belief collapse can remove discomfort quickly while increasing long-term informational distortion. By contrast, difficult action preserves coherence even when it increases short-term tension.

    Survival, in Cybormone Theory, generalizes beyond biological persistence to future-state coherence. At higher levels, agents act to preserve identity, agency, optionality, and continuity across timelines. Wellbeing signals that future informational pathways remain open.

    Finally, Cybormone Theory introduces an ethical discriminator. Not all tension reduction is constructive. Some actions reduce discrepancy by destroying information, autonomy, or future capacity. True intelligence is the capacity to resolve discrepancies while preserving informational integrity across scales, systems, and time.

    Tony’s model explains how goals emerge. Cybormone Theory explains why some goals stabilize systems while others collapse them, and why intelligence is ultimately about safeguarding the future coherence of information, not merely relieving the present.

  13. Merary Rodriguez Avatar
    Merary Rodriguez

    Is collapse how attractor landscapes evolve?

  14. Bill Miller Avatar
    Bill Miller

    I watched with interest your discussions with Iain McGilchrist and others — especially regarding the matter of how form-generating information originates, is stored, and instantiates in the physical world. Rightly or wrongly. my takeaway was that the phenomenon has a non-localizable field-like nature.

    I wonder if broadcast media might be an apt analogy? Even within the modest range of our perceptible spectrum, the EM field is capable holding and transmitting a vast amount of information. Further, in a domain where such signals do not encounter resistance or absorption, and can be read nondestructively, the carried information would exist in perpetuity. That being the case, perhaps Forms are aggregated/constructed dynamically, from simple to complex and then remain accessible.

    Ancient mystics often seemed to intuit something profound. Maybe that’s the notion of an “Akashic Record”. Or maybe the evolutionary aspect of Process Theology.

    What’s your thought on biologist Rupert Sheldrake’s “morphogenic field” concept?

    1. Mike Levin Avatar
      Mike Levin

      I’m not sure EM fields can be read non-destructively; antennas (inductors) couple to the field and draw energy out of it, although it can be made arbitrarily small I suppose. The Earth is at the epicenter of a sphere of 3 Stooges episodes (tv broadcasts) expanding out into the universe at ~100 light years in radius currently; who knows what else is detectable too. I think Rupert has a very important idea (law of physics as habits of a cognitive system) but I suspect the reality is even weirder than the sensitization he describes – I think those forms can do other forms of cognitive behavior – from habituation, associative conditioning, etc. to perhaps higher levels. I will be having a discussion with him shortly.

      1. Mike Levin Avatar
        Mike Levin

        I’m not sure EM fields can be read non-destructively; antennas (inductors) couple to the field and draw energy out of it, although it can be made arbitrarily small I suppose. The Earth is at the epicenter of a sphere of 3 Stooges episodes (tv broadcasts) expanding out into the universe at ~100 light years in radius currently; who knows what else is detectable too. I think Rupert has a very important idea (law of physics as habits of a cognitive system) but I suspect the reality is even weirder than the sensitization he describes – I think those forms can do other forms of cognitive behavior – from habituation, associative conditioning, etc. to perhaps higher levels. I will be having a discussion with him shortly.

        1. Bill Miller Avatar
          Bill Miller

          I look forward to your conversation with Rupert Sheldrake! I’d be particularly interested to hear about the nature of his form-generating fields – might they lie in some subtle domain (“mind” domain) that underlies the EM spectrum? (I once had a philosopher housemate (Christian de Quincey) who was adamant that consciousness not be considered a form of energy (at least in the conventional sense.) In such case, perhaps information could be transferred non-destructively in the way that information contained in a motion picture film does not become depleted, regardless of repetitive screenings or the size of the viewing audience.

          1. Mike Levin Avatar
            Mike Levin

            I agree that consciousness is not a form of energy (in the sense that physics does a nice job on energy, studying it from a 3rd person perspective, which is not sufficient for consciousness). Rupert likes quantum indeterminacy as the interface between his fields (which are not fields in the materialist physics sense of the word) and matter.

            1. Bill Miller Avatar
              Bill Miller

              I’ll be interested to hear Rupert elaborate on indeterminacy as a causal factor. If I recall correctly, in a past interview you suggested that mechanical cause-and-effect plus randomness were insufficient to account for all forms of instantiation. I have a religious background that I’d left behind decades ago, but perhaps it’s left me somewhat amenable to a “woo” factor. Not in the sense of magic or divinity, but simply something we cannot fully conceptualize or model at present. (But I remain naively hopeful.)

              1. Mike Levin Avatar
                Mike Levin

                Yeah this is subtle. I think our formal models of mechanical cause-and-effect plus randomness are not sufficient to account for real agency. But, as it turns out, our formal models miss something very important, and it sneaks in to everything, to a degree, including systems that we always thought were fully captured by those formal models of algorithms, machines, etc. The beginnings of it are described here: https://thoughtforms.life/what-do-algorithms-want-a-new-paper-on-the-emergence-of-surprising-behavior-in-the-most-unexpected-places/ but there’s a lot more coming. And as for woo factor, I don’t know anything that’s more woo than the basic, long-known fact that immaterial patterns, not derivable from physics, like the value of e, explain and determine (i.e., functionally control) aspects of the physical world!

  15. Benjamin L Avatar

    I’ve been pulling together references for a paper synthesizing developmental biology and developmental psychology, and the connections here are really strong. The dynamic systems approach to developmental psychology has hit upon very similar ideas.

    1. Nervous signals as prompts, not determinants. The conversion process turning brain signals into behavior is intelligent (dynamic, adaptive, able to navigate around obstacles and perturbations, and the details are not specified by the brain).

    A classic example from this literature of things being prompts rather than determinants is a stop sign. A stop sign seems to make cars stop, but of course you’re free to drive right through them. A stop sign contains no instructions about how to stop a car or the physical processes that occur when you step on the brakes.

    You could imagine an alien scientist watching human traffic and seeing the cars go on green lights and stop on red lights with high regularity and conclude that the green lights and the red lights are determinants of our driving behavior. (Proof of this would be in the fact that people wait at red lights even when there is no crossing traffic, showing the decision to wait is not intelligent.) Their scientists would star theorizing about how different wavelengths of light interact with our brains, or even with the mechanical processes of the cars themselves. And they’d be dead wrong—the green and red lights primarily inform us about the expectations of our fellow drivers, letting us know when it’s safe to go and when it’s smarter to wait.

    2. Running a nervous system through multiscale, agential material has significant implications for what intelligent behavior is and how it is produced. The body of a developing infant is not a passive substrate. Just as their are cellular competencies, there are *motor competencies* that are intrinsic to the infant’s body—they exist prior to, and without, the high-level guidance of a brain. Much of the intelligent problem-solving that is necessary for efficient movement is handled by the intrinsic dynamics of the body itself. Lots of stuff comes “for free” from those dynamics. Consider the fact that astronauts figured out how to walk on the Moon pretty easily. The brain didn’t solve this problem on its own; the dynamics of the astronauts’ bodies gave most of the solution for free.

    In this view, motor behavior doesn’t come about due to selection on generations of organisms, and it doesn’t exist for survival and/or reproduction. Motor behavior happens when you have a body that’s a good interface for those dynamics to show up—as proven by passive dynamic walkers.

    3. Where do these dynamics come from? Not from selection, or even from physics, but from mathematics—dynamic systems theory, chaos theory, etc. If you ask enough questions about how motor behavior develops, you end up in the math department.

    Unrelated, but on the topic of whether Platonic forms are eternal and unchanging or not, I do think Platonic forms may have to use each other to figure themselves out. Take the category of sets, for example. This category contains all sets as objects, so it is a “large” category because its objects can’t be contained in a set (since there’s no set that can contains all sets, by Russell’s paradox). So if you ask this category, “Do you have products?” (this type of thing: https://en.wikipedia.org/wiki/Product_(category_theory)), then the category of sets might not be able to search itself and answer since the space is too big. But by ingressing an organizing pattern from a simpler category, most of the work gets taken care of, and all the category of sets has to do is line up the dominoes and knock them down, so to speak. (https://interestingessays.substack.com/p/ingression-within-mathematics-pattern)

      1. Benjamin L Avatar

        Yes! I should get in touch with them. Motor behavior and development really show not just that all intelligence is collective intelligence, but the members of the collective aren’t simply traditional types of entities like cells or people but the dynamics of a process itself.

  16. chris j handel Avatar
    chris j handel

    The “free” dynamics are inseparable bounding and distinguishing at membranes that were coupling before any observer installed a controller between the signal and the behavior.

Leave a Reply

Your email address will not be published. Required fields are marked *