Game Theory Meets Morphogenesis: the Physarum Dilemma

There are many formal models that try to simulate competitive and cooperative dynamics; the Prisoner’s Dilemma (PD) is one popular example, often referenced in game theory, economics, and evolutionary biology. I wanted to explore the intersection of these ideas with morphogenesis: the obvious (but still poorly understood) alignment of cells toward a goal in anatomical morphospace, a specific large-scale outcome which the collective pursues despite a wide range of perturbations and barriers. Thus, Lakshwin Shreesha and I set out to see if game-theoretic models of rational agents making decisions could be harnessed to explain and control morphogenesis.

We used simulations of iterated, spatialized PD – each agent on a 2D grid plays against its local neighbors, repeatedly. But there’s one key thing missing from the standard formalism: in typical game theory models, there is a fixed set of players. That is, there is a set of agents and a payoff table for the various actions, but there is only one level of agency, and the agents’ actions cannot change this basic structure. Life is not like that; part of the magic of the material we call “alive” is that it can bind together into larger-scale systems (expanding the cognitive light cone of the parts and projecting it into new problem spaces, such as the move from the physiological space that cells navigate to anatomical morphospace) or fragment (dissociate and scale down the size of the Self – as in the dissociative identity disorder of multicellularity we call cancer). But this ability to scale the Self dynamically fundamentally changes how game theory modeling works.

Consider the slime mold Physarum: (video below made by my former post-doc Nirosha Murugan for this paper)

Let’s say it begins to elongate toward a food target. Modeling this decision is simple – “go get the food”. But then an experimenter comes along and cuts the leading 10% of it off from the rest (see here for some actual data on this, which was part of a home-school study unit I did with my kids). Now something very interesting happens. The separated little blob has a decision to make: I can go get the food for myself, and not share it with the rest of the mass (huge energy density win for me!), or, I can move backwards and re-merge with the rest and then “we” will go get the food. Of course, I’m not claiming the Physarum is symbolically or linguistically having those thoughts; this is just a way of framing the possible options and their adaptive payoffs as we study the evolutionary implications of different behavioral strategies. The key thing here is that if it were to re-merge, the selfish question of “grab the food and not share it” becomes meaningless – the system will be a syncytium and make decisions as a whole. Such calculus is only relevant while there is a separate agent that can be the subject of this payoff table. So, what is happening here is that the actions of an individual actually change how many individuals there will be – a very meta aspect, because the payoff table (and the number of entries in it) is actually plastic, and shifts dynamically during the simulation. Based on what you do, you may or may not exist in the future, so your relationship with Future You is radically altered. To my knowledge this has not been explored before (the closest thing I’ve been able to find, to such dynamic payoff tables, is hyperbolic discounting).

In order to enable such complex feedback between decisions and the number of agents able to make decisions, we modified PD: agents can now cooperate or defect, on each turn, but they can also Merge (with a neighbor) or Split. Now this more closely mimics the dynamic spectrum between multicellularity and cancer. All of the details and data are in this paper, summarizing the project implemented by Lakshwin Shreesha, Federico Pigozzi, and Adam Goldstein in my group. Here’s the Abstract:

Evolutionary developmental biology, biomedicine, neuroscience, and many aspects of the social sciences are impacted by insight into forces that facilitate the merging of active subunits into an emergent collective. The dynamics of interaction between agents are often studied in game theory, such as the popular Prisoner’s Dilemma (PD) paradigm, but the impact of these models on higher scales of organization, and their contributions to questions of how agents distinguish borders between themselves and the outside world, are not clear. Here we applied a spatialized, iterated PD model to understand the dynamics of the formation of large-scale tissues (colonies that act as one) out of single cell agents. In particular, we broke a standard assumption of PD: instead of a fixed number of players which can Cooperate or Defect on each round, we let the borders of individuality remain fluid, enabling agents to also Merge or Split. The consequences of enabling agents’ actions to change the number of agents in the world result in non-linear dynamics that are not known in advance: would higher-level (composite) individuals emerge? We characterized changes in collective formation as a function of memory size of the subunits. Our results show that when the number of agents is determined by the agents’ behavior, PD dynamics favor multicellularity, including the emergence of structured cell-groups, eventually leading to one single fully-merged tissue. These larger agents were found to have higher causal emergence than smaller ones. Moreover, we observed different spatial distributions of merged connectivity vs. similar behavioral propensities, revealing that rich but distinct structures can coexist at the level of physical structure and the space of behavioral propensities.
These dynamics raise a number of interesting and deep questions about decision-making in a self-modifying system that transitions from a metabolic to a morphological problem space, and how collective intelligences emerge, scale, and pattern.
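
For readers who want the flavor of the setup, here is a minimal toy sketch in Python. This is my illustration only, not the code from the paper: the mostly-cooperate policy, the flat 5% merge probability, and the omission of the Split action are all simplifying assumptions.

```python
import random

# Standard PD payoffs, keyed by (my_move, opponent_move)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class Agent:
    """One individual; after Merges it may span several grid positions."""
    def __init__(self, cells):
        self.cells = set(cells)  # grid positions this agent occupies
        self.energy = 0.0

def neighbors(agents):
    """All pairs of distinct agents with 4-adjacent cells on the grid."""
    pairs = []
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            if any((x + dx, y + dy) in b.cells
                   for (x, y) in a.cells
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                pairs.append((a, b))
    return pairs

def play_round(a, b, rng):
    """One PD round with a placeholder policy: defect 10% of the time."""
    ma = "D" if rng.random() < 0.1 else "C"
    mb = "D" if rng.random() < 0.1 else "C"
    pa, pb = PAYOFF[(ma, mb)]
    a.energy += pa
    b.energy += pb

def step(agents, rng, merge_prob=0.05):
    """One turn: neighboring agents play PD and may Merge.

    The key departure from standard PD: a Merge removes a player,
    so the agents' own actions shrink the payoff table."""
    alive = set(agents)
    for a, b in neighbors(agents):
        if a not in alive or b not in alive:
            continue  # one of them was absorbed earlier this turn
        play_round(a, b, rng)
        if rng.random() < merge_prob:
            a.cells |= b.cells   # a becomes a composite individual
            a.energy += b.energy
            alive.discard(b)     # b no longer exists as a player
    return [ag for ag in agents if ag in alive]
```

Run `step` repeatedly on a seeded grid of single-cell agents and the population count falls as composites form, while the union of occupied cells stays constant – the number of decision-makers becomes an output of the game rather than a parameter.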

Watch a talk here that explains in detail what we did and what we found:

And, you can play with the simulations yourself, here:

https://lksshw.github.io

If you find anything interesting there, let us know! I will just mention a few key findings from our paper:

  • Over time, it appears this dynamic favors multicellularity – regions are formed, which get bigger. And that’s without any of the usual drivers that have been proposed to cause multicellularity (need to get bigger to avoid being eaten, etc.)
  • Remarkably, larger higher-level agents have greater causal emergence than smaller ones, suggesting a link between competition-driven multicellularity and integrated agency (see here for more on this topic), which might have significant implications for a feedback loop up-scaling intelligence in evolution.
  • These dynamics reveal the presence of not only structural features (actual boundaries between merged cells) but also physiological/behavioral/cognitive domains that do not respect (and cannot be inferred from!) the anatomical boundaries, suggesting this as a minimal model of the origin and dynamics of non-physical patterns that are important targets in biomedicine, neurology, and diverse intelligence contexts.
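
The “causal emergence” in the second bullet can be made concrete with Erik Hoel’s effective information (EI): a coarse-grained (macro) description can carry more EI than the micro description it summarizes. Here is a minimal sketch of the measure on a toy transition matrix; the specific matrix is my choice for illustration, not one from the paper.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def effective_information(tpm):
    """EI of a transition probability matrix under a uniform
    intervention distribution: H(average row) - average row entropy."""
    n = len(tpm)
    avg = [sum(row[j] for row in tpm) / n for j in range(n)]
    return entropy(avg) - sum(entropy(row) for row in tpm) / n

# Micro: 4 states; the first three scramble noisily among themselves,
# the fourth is a fixed point.
micro = [[1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [0,   0,   0,   1]]

# Macro: group the first three states into one; the dynamics become
# deterministic, and EI rises (from ~0.81 bits to 1 bit here).
macro = [[1, 0],
         [0, 1]]
```

When `effective_information(macro) > effective_information(micro)`, the coarse-grained agent has more causal power than its parts taken individually – the quantity the paper reports rising for larger merged agents.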

There’s one other interesting issue to mention. We were initially puzzled by one thing: while the health of agents was steadily rising as the population developed bigger multicellular individuals, we observed a precipitous drop-off toward the end. Why? Remember that cells gain energy according to the payoff matrix of PD, and do very well as cooperative subunits of multicellular blobs. But when the blobs get big (and thus fewer in number), there are fewer others to play against, and thus, fewer and fewer opportunities to get reward! This raises a profound eschatological question about how to simulate the end of this kind of world. More broadly, what happens when a form of life and mind expands to the edges of its universe – when there is no one else to interact with because everything has merged into one pervasive being? I can think of 3 possible ways forward:

  1. everything dies – a sort of heat-death of the universe scenario, where the agent has consumed everything there is to consume, and thus dies.
  2. a cycle of fragmentation and unification – perhaps the boredom of being the only mind in a universe results in a (possibly traumatic!) fragmentation – like a human mind under great stress splitting up into personalities. (See this concept discussed by Bernardo Kastrup and Rupert Spira – that we are all fragmented alters produced by a dissociative identity process from a great cosmic universal mind). This could then lead to progressive cycles of fragmentation and unification, over and over again (a kind of Breaths of Brahma or bouncing universe model).
  3. breaking through into a new space – perhaps, as happens with cells gaining access to anatomical morphospace by networking into multicellularity, an agent that has achieved sufficient unity (and sufficient causal emergence) can then exert its efforts into an entirely new space, beginning the cycle of exploration (and possibly of unification with other agents who may already be there).
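
Incidentally, the payoff drop-off described above has a simple combinatorial reading: if reward opportunities scale with the number of distinct agent pairings, merging erodes them quadratically. A back-of-envelope illustration (my arithmetic, not a result from the paper):

```python
def max_pairings(n_agents):
    """Upper bound on distinct-agent games per round: n choose 2."""
    return n_agents * (n_agents - 1) // 2

# Each halving of the population (via merging) roughly quarters the
# opportunities to earn payoff; the final fully-merged agent earns none.
schedule = [(n, max_pairings(n)) for n in (16, 8, 4, 2, 1)]
# → [(16, 120), (8, 28), (4, 6), (2, 1), (1, 0)]
```

So the health collapse at the end of the runs is baked into the combinatorics: the last merged being has literally no one left to play against.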

Future work will explore all of these questions, and link the models more tightly to biomedically-relevant policies for managing the merge-split decisions of real cells and multicellular components, as well as for detecting and reprogramming the non-anatomical, subtle patterns (of energy, information, alignment, stress, etc.) that may guide health and disease.


Featured Image by Midjourney.

32 responses to “Game Theory Meets Morphogenesis: the Physarum Dilemma”

  1. James of Seattle

    This is so cool.

    First thought: have you read Tchaikovsky’s book “Alien Clay”? He takes this idea up to organisms combining and de-combining.

    Second thought: Isn’t AI, and eventually artificial life, an example of “breaking into a new space”? What new space will the robots break into after colonizing the galaxy?

    Third thought: Does Cronin’s and Walker’s Assembly Theory provide a driving force for exploring new spaces?


    1. Mike Levin

      I have not read it – I’ll check it out. Yes, AI’s are going to be breaking into new spaces while we sit here looking at them not trundling around on wheels/legs and saying “they are disembodied and thus not real agents”, unless we crank our progress on the whole diverse intelligence research program. Possibly Assembly Theory helps here, I’m not sure – I don’t have enough expertise in it.

  2. Graham Lawrence

    I probably have no right to comment as I am not qualified to do so. I am just stunned that starting from being interested in one or two videos on YouTube, I have passed some sort of philosophical thresholds and arrived at some kind of potential appreciation of how we must have so incredibly little idea of how our species (or the range of types of species we become part of) will, in say 300 years’ time, view or understand or speculate about things that probably seem undeniably “straightforward” to most people: the relationship between subordinate parts of the personality, and of the body, and between “the individual” (the individual what?!) and “the group” (at various levels), and the nature of concepts of cognition, intelligence, patterns, consciousness, and purpose, that will be beyond even the dreams of the great mystics (as well as scientists) of the past. I am signed up as a student at the Peterson Academy and I would so love to have Dr Levin put some video courses on there, although I appreciate this seems unlikely in view of present workload commitments. In the meantime, boy am I appreciating the universe, and seeing the expression of life and cognition through e.g. the leaves on the trees! Thank you.

    1. Mike Levin

      I don’t know what the Peterson Academy is, but my material is free and available to all, here on this blog, and at
      https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
      https://thoughtforms-life.aipodcast.ing/
      https://drmichaellevin.org/presentations/
      https://drmichaellevin.org/publications/

    2. Doina Contescu

      Putting Dr. Levin and Jordan Peterson in the same universe is not just a nauseating insult to science, it shows an appalling lack of intellectual discrimination. I had Peterson as a professor at Harvard in the ’90s. He didn’t earn tenure; he left because his academic work simply didn’t warrant the recognition or permanence reserved for actual scholars. That was before he reinvented himself as a loudmouth demagogue peddling grievance and snake oil to the angry and uncritical, abandoning any pretense of contributing real knowledge.

      You are talking about Dr. Levin, a scientist whose breakthroughs may very well put him on a Nobel stage, vs. a mentally-unhinged Peterson, a disgraced ex-academic who long ago chose to monetize conspiracy, resentment, and intellectual vandalism over any contribution to genuine learning. There is no comparison. Peterson’s “Academy” is an unaccredited, cult-like echo chamber that churns out dogma and monetized outrage while squashing dissent, refusing dialogue, and surrounding itself with sycophantic mediocrity.

      To tie Dr. Levin’s name to that intellectual landfill is not just ignorant, it’s a slap in the face to anyone who actually values discovery, rigor, and truth. If you can look at the chasm between world-changing science and this circus of weaponized ignorance and still think they belong in the same conversation you’re not just confused, you’re actively participating in the dumbing down of public discourse. I usually don’t boss people around, but this is such a willful disregard for the difference between rigorous science and opportunistic exploitation that I suggest you edit your comment to exclude that mention.

      1. Mike Levin

        ah, now I think I understand what the Peterson Academy is, I had no clue 🙂

      2. Rik Lubking

        Note that Peterson as a former academic and clinical psychologist is culturally and politically relevant because he’s spoken out against certain ideologies, including those popular in academia. Specifically the social sciences, which he argues have been captured by activism (see also gender ideology and grievance studies). He’s also critical of the way universities are run. After the success of his own online lecture series, he now seeks to offer an affordable, high-quality online alternative to university for the masses via his Peterson Academy, and seeks out scholars to produce lectures with the help of his production company.

        So, I wouldn’t accept any such review of his “Peterson Academy”, since reviewers may conflate their ideological objections with criticism of his professional and academic credentials (as is clearly the case here).

        Peterson is an accomplished clinical psychologist and academic, and an expert on psychology and psychiatry. He’s also aware of active inference, and tries to integrate Jungian archetypes and religious/mythological motifs into his thinking. If you’re looking for a place where non-orthodox views get a fair hearing, with a professional production company to help you create lectures, Peterson Academy may be interesting to you.

        Look up Peterson Academy yourself and make up your own mind, I’d say.

  3. Abhishek Singh

    I think merging and splitting can be viewed as a special case of cooperation and defection in game theory. What do you think of the combinatorial complexity argument that unified agents are more capable of producing new categories of things (opening a new space of possibilities, as you said)? If that happens, then we will never run out of problem and solution spaces; in fact, they will only increase in number as the latent structure inhabited by the generator organism gets richer. For example, an AI image generator can generate more distinct realistic images than there are atoms in the universe.

    1. Mike Levin

      > I think merging and splitting can be viewed as a special case of cooperation and defection in game theory

      hmmm but in any specific implementation, we have to choose exactly what payoff merged (and split) individuals receive. That is, it’s a free parameter, the merge/split and cooperate/defect are not connected by any necessary rule, they are orthogonal. Thus I think it’s impossible to say they are a special case because if it were, there would be a specific relationship that would constrain our choice – we would simply know, from the definition of each, which special case of C/D is being called M/S. But they are not tied and can vary quite independently. And yes, I think the rest of your point is quite plausible.

  4. Michael Bluth

    Michael, it seems that Physarum polycephalum is actually special in that it declines to play the Prisoner’s Dilemma game: as you confirmed experimentally, the underlying algorithm is preserved even if segregation is induced artificially. It is in fact always only one player, and it allows segregation only if multiple food sources are available, further optimizing biomass.

    Doesn’t the slime mold, for this very reason, somewhat contradict Dawkins’ theory of selfish genes, in that the genetic material that is introduced into spores is not favored in subsequent reproductive cycles?

    1. Mike Levin

      Our Physarum work on this needs a lot more experiments before anything definitive can be said, but that does appear likely to be the case.

      As for Dawkins’ framework, it is a specific perspective – I doubt it can be falsified by any discrete observations because, like reductionism, you can *always* choose to tell a gene-level (or atom-level) story about anything that happened. The bigger picture question though is: does that perspective enable optimal understanding, prediction, control, and invention, for the future? I think no – there are many examples (discussed by Denis Noble, myself, and numerous others) which are simply not tractable from that viewpoint. The selfish gene idea is useful for some aspects of the life sciences, but I think it closes off massive areas of discovery, it’s a very limited perspective.

  5. John Shearing

    > More broadly, what happens when a form of life and mind expands to the edges of its universe?
    The question feels perverse to me but is redeemed by option three.
    The universe has no edges.
    At all times, all three ways forward are active for each and every one of us. And infinitely many more as well. Frightened, shortage conscious beings will find themselves on paths one and two. Those with love in their hearts and faith in infinite creation will find themselves enjoying the blessings of infinite growth and infinite community, even when they would appear to be alone and cut off.

  6. Benjamin L

    > The key thing here is that if it were to re-merge, the selfish question of “grab the food and not share it” becomes meaningless – the system will be a syncitium and make decisions as a whole.

    The economy seems to strike a balance here: an individual member of the economy’s decisions will be influenced by everyone else’s demands for the food or other resources so that they will end up being prosocial in practice, but the decision will be experienced as self-interested by the individual member—they won’t *notice* the influences they receive from everyone else through the price system because the memories are anonymized.

  7. Rik Lubking

    Have you considered that giving people such mental models may allow them to perceive and interact with reality differently, which, if yet undiscovered domains of self-organising systems exist, may open up vectors for migration?

    Which would include disease and parasitism, not to mention rapid change to colonise and adapt to novel environments.

    Have you considered precautions and safety measures?

    1. Mike Levin

      I’ve been talking about this a lot – when we make things (AIs, biobots, the internet of things, etc.) we make interfaces to new regions of the Platonic Space which may include patterns no one has ever seen before. It is imperative to study this space and understand its structure, to formulate safety strategies. Unfortunately, as with the study of viruses and bacteria, not doing the research and keeping a head-in-the-sand strategy doesn’t work – the only path to effective precautions is a good understanding of the relationship between interfaces and the patterns they enable; otherwise we’ll keep stumbling into these issues even when making things we don’t *think* are unusual interfaces.

      1. Rik Lubking

        Hi Michael,

        Thanks for the response.

        I appreciate that, and yes I agree that it’s important that we investigate, especially considering technologies like social media, LLM’s and AI creating landscapes of novel niches, and potentially self-organising systems.

        I was more implying memetics though, as in your work/ideas generalising to consciousness and cognition. It’s one thing to create and colonise new technological niches that didn’t already have self-organising systems, it’s another to make people question where their thoughts come from and whether their environment and the groups they’re a part of may be intelligent, alive or conscious somehow.

        You’ve said that you’re already receiving concerning e-mails from confused individuals. I worry that this may become worse if your work generalises to consciousness and cognition. It would seem wise to consider that possibility in advance.

        Thanks for your time and attention, I just wanted to make sure you were aware.

        Regards, Rik Lubking.

        1. Mike Levin

          thanks; I am considering it. The number of people destabilized by the current frameworks of neuroscience, evolution, and physics – which dissolve the ancient groundings for agency but generally have not replaced them with a better foundation that addresses the crisis of meaning – has got to be orders of magnitude greater than the number of people who email me about my stuff…

  8. Benjamin L

    > More broadly, what happens when a form of life and mind expands to the edges of its universe – when there is no one else to interact with because everything has merged into one pervasive being? I can think of 3 possible ways forward:

    > 1. everything dies – a sort of heat-death of the universe scenario, where the agent has consumed everything there is to consume, and thus dies.

    I’ve been wondering about the relationship between entropy and collective intelligence. On the one hand, there’s the second law of thermodynamics, which says everything tends to fall apart. On the other hand, there’s Aumann’s Agreement Theorem, which says that things tend to come together. They’re opposites, but not equals—entropic forces clearly seem to win over the long run, while Aumannian influences often seem to be slow, short-ranged, easily blocked or disrupted, and quickly forgotten once disrupted.

    I feel like there could be some important connection between the second law of thermodynamics and Aumann’s agreement theorem that might help make sense of what the future could look like, but I don’t know what it is.

    1. Mike Levin

      cool. I don’t know Aumann’s Agreement Theorem, I’ll have to look it up. But I think we found another “force” (not really a force but I don’t know what else to call it – like entropy, what – a “universal tendency” or something?) that should potentiate a kind of intelligence spiral. Stay tuned, we’re working on writing something on this.

      1. Benjamin L

        Looking forward to it. Aumann’s Agreement Theorem says that rational actors cannot agree to disagree; I think it suggests that things will tend to form shared models, which I think means that things tend to become collective intelligences. But it doesn’t say anything about how strong that tendency is.

        https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem

      2. Rik Lubking

        An archetypal dynamic, as in an abstract pattern that reoccurs whenever the required roles are present.

        Like waves, which may appear in different media.

        It’s essential (top-down, form and function), not fundamental (bottom-up, material and history), in that it doesn’t require any history to repeat itself, unlike say DNA, which is fundamental, not essential.

        Cheers

  9. Luke McNabb

    It is somewhat off topic but what was the gentleman’s name that you had mentioned in a previous talk, that was using information theory to mathematically describe the levels of and amount of agency? Can’t seem to find anything about it…

    1. Mike Levin

      Take a look at the work of Giulio Tononi, Erik Hoel, and there are others.

  10. Larry Pace

    Information as a Driver of Integration
    “…without any of the usual drivers that have been proposed to cause multicellularity… [this model shows] the information and payoff dynamics themselves drive emergent complexity.”

    This insight directly affirms the central premise: the universe evolves not merely by responding to physical survival pressures, but by reducing the time, distance, and energy cost of accessing and integrating meaningful information. In this framework, information—not matter—is the primary evolutionary substrate, and recursive coherence is the means through which increasingly integrated systems optimize their engagement with it.

    1. Time-to-Information (speed-to-information as an analog) Reduction as Evolution’s True Driver
    This model proposes that across all scales—biological, cognitive, technological, and even cosmological—systems evolve to minimize the latency between a need-to-know and an ability-to-know. This principle, termed Time-to-Information Reduction (TIR), supersedes classical models of evolutionary advantage based solely on energy capture or threat avoidance.

    In your PD-based multicellularity simulation, the agents that merged into more cohesive structures were rewarded not by size or survival alone, but by their improved ability to navigate payoff dynamics, which are themselves informational mappings of interaction potential. These merged agents became better at predicting, modeling, and responding—hallmarks of improved information processing.

    Thus, multicellularity here is not just an adaptive response—it is a computational improvement, and its emergence mirrors the thesis that systems gain advantage when they can recursively compress and access relevant information across scales of integration.

    2. Recursive Coherence Over Material Competition
    Rather than raw material efficiency or ecological dominance, this perspective suggests that what gives a system enduring advantage is its internal alignment across recursive layers of behavior, memory, and ethics—I call it Recursive Coherence.

    In your model:

    The more coherent a colony (via Merge), the greater its capacity to store and use payoff memory.

    The memory itself became a recursive asset, enabling the system to optimize for future behavior by compressing and aligning past experience.

    This mirrors how some agents develop dimensional fidelity: by aligning intent, structure, and prediction across nested temporal frames, they increase both internal coherence and the system’s evolutionary stability.

    This coherence is not reducible to fitness, but instead reflects dimensional integration: systems that act as one across multiple axes of differentiation, while preserving internal heterogeneity. Multicellular emergence in your model, driven by payoff dynamics, reflects the same fidelity-based convergence seen in some higher-order decision agents.

    3. Ethical Modeling as a Byproduct of Causal Integration
    This model also emphasizes that ethics is not an overlay, but an emergent property of coherent information processing. As agents integrate, they must:

    Predict outcomes not just for themselves, but for parts of themselves (as once-independent agents).

    Manage intra-agent conflict across memories, preferences, and spatial scales.

    Develop policies (conscious or algorithmic) that simulate and test possible future states with respect to alignment.

    This is precisely what begins to emerge in your model when agents merge. The payoff dynamics force the group to develop a kind of meta-strategy—a higher-level behavioral policy—governing Merge, Split, and cooperation. In certain terms, this becomes a coherent probabilistic interface: a field through which aligned moral, energetic, and predictive functions emerge, recursively encoded in the agent’s internal decision logic.

    4. Implications for Multiscale Intelligence and Non-Biological Cognition
    Your model, when interpreted through this lens, demonstrates the universal applicability of informational drivers in the formation of cognitive agents. This reframes the origin of intelligence and agency not as contingent upon biology, but as arising anywhere recursive alignment and information compression generate internal state fidelity.

    Whether in cell colonies, artificial neural architectures, or cosmological systems, informational recursion drives emergence. Your observation that complexity arises “without any of the usual drivers” confirms this claim: informational coherence is sufficient—and perhaps necessary—for life and mind to evolve.

    Closing Reflection
    In sum, your spatialized, iterated PD model provides computational evidence that integration is a function of information alignment more than physical necessity. This aligns fully with my view of recursive convergence as the prime engine of evolution, where entities evolve toward greater ethical coherence, dimensional fidelity, and informational self-awareness.

    Your work presents not just a biological insight, but an ontological one: the universe rewards coherence not only with survival, but with agency—and ultimately, with meaning.

  11. Felipe C Argolo

    Awesome piece!

    Concerning the alternatives, No. 3 seems quite interesting as a research axis. Thinking along the lines of re-scaling and renormalization groups, it would be interesting to include different games for agents at different scales.
    That is, keeping the bottom-level agents when merging occurs and setting up a 2-level game, where agents have individual policies and sets of agents (cells) also have policies.

    1. Larry Pace

      I apologize for the tardy reply. Thank you for your thoughtful reply—your suggestion to explore re-scaling and renormalization via multi-level games is exactly the type of deepening that could push this framework further.

      We see at least three promising dimensions to unpack:

      1. Preservation of Bottom-Level Agency Within Higher-Level Structures
      Your point about keeping bottom-level agents active even after merging is crucial. It acknowledges that integration never annihilates individuality—it instead re-contextualizes it. In practice, this would mean that while a “colony” has policies governing group-level behaviors, each constituent “cell” also retains policy space that can, in certain conditions, diverge. This preserves heterogeneity and prevents the colony from collapsing into brittle uniformity.

      From the perspective of recursive coherence, this models real-world complexity: cells remain themselves while also becoming organs; citizens retain individuality within a polity; neurons have microdynamics inside the macro-stability of a brain. The tension between local and global policies is where much of the evolutionary creativity—and ethical negotiation—emerges.

      2. Renormalization and Scale-Shifted Payoff Structures
      By introducing different games at different scales, the system begins to mirror the logic of renormalization groups in physics: local interactions give rise to emergent parameters that redefine the effective “rules” at higher scales. A merged colony might play not only Iterated PD, but also Coordination or Stag Hunt at the macro level, while constituent agents continue to navigate PD at the micro level.

      This would formalize the insight that “ethics is emergent from causal integration.” Local selfishness is tempered by macro payoffs that only exist at the level of the group. Over time, the group-level strategies feed back down, reshaping micro-agent incentives (e.g., via memory sharing, punishment, or reward allocation). This scaling-up and scaling-down of payoff structures models exactly how coherence becomes recursive rather than one-directional.

      3. Emergence of Multi-Scale Governance and Ethics
      Once both micro and macro policies coexist, the challenge becomes how to align them. Does the group enforce coherence via majority rule, weighted payoff, or memory compression? Do individuals defect against the group if local payoff is too low? The model then begins to simulate the birth of proto-governance systems, where group-level coherence emerges not by erasing conflict but by managing it across scales.

      This is where your proposed research axis resonates most strongly with our framing: “Ethics as a byproduct of causal integration.” By layering games, you set the stage for agents to confront genuine trade-offs—between autonomy and collectivity, immediate and delayed payoff, local and global stability. What emerges are higher-order policies that increasingly resemble moral codes or constitutions. These are not imposed overlays but necessary computational artifacts of multi-scale integration.

      Potential Research Trajectory

      Step 1: Implement merged agents with retained micro-agents playing PD, while the macro-agent plays a coordination game with other macro-agents.

      Step 2: Explore how memory at the group level compresses or distorts memory at the cell level. Do merged structures learn faster or slower depending on how coherence is maintained?

      Step 3: Test whether recursive coherence stabilizes better when policies are aligned probabilistically (through a fidelity function) versus deterministically.

      Step 4: Investigate whether meta-strategies emerge that explicitly arbitrate between levels (proto-governance).
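      To make Steps 1–3 concrete, here is a minimal toy sketch in Python. All class names, payoff values, and the coherence-based feedback rule are our illustrative assumptions, not the model from the paper: micro-agents play pairwise PD inside each merged colony, each colony plays Stag Hunt at the macro level, and macro payoffs feed back down to nudge micro-level cooperation probabilities.

```python
import random

# Hypothetical payoff tables: standard PD at the micro level,
# Stag Hunt at the macro level. All values are illustrative.
PD = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
STAG_HUNT = {('S', 'S'): (4, 4), ('S', 'H'): (0, 3),
             ('H', 'S'): (3, 0), ('H', 'H'): (2, 2)}

class MicroAgent:
    """Bottom-level agent with a mixed strategy: cooperate with prob p_coop."""
    def __init__(self, p_coop):
        self.p_coop = p_coop
        self.score = 0.0
    def act(self):
        return 'C' if random.random() < self.p_coop else 'D'

class MacroAgent:
    """Merged colony: retains its micro-agents (Step 1) and plays a
    coordination game whose move depends on internal coherence."""
    def __init__(self, members, threshold=0.5):
        self.members = members
        self.threshold = threshold
        self.score = 0.0
    def internal_round(self):
        # Micro-agents play pairwise PD inside the colony.
        for i in range(len(self.members)):
            for j in range(i + 1, len(self.members)):
                a, b = self.members[i], self.members[j]
                pa, pb = PD[(a.act(), b.act())]
                a.score += pa
                b.score += pb
    def coherence(self):
        # Mean cooperation propensity: a crude "information alignment" proxy.
        return sum(m.p_coop for m in self.members) / len(self.members)
    def act(self):
        # A coherent colony dares to hunt the stag; an incoherent one hedges.
        return 'S' if self.coherence() >= self.threshold else 'H'

def macro_round(g1, g2):
    p1, p2 = STAG_HUNT[(g1.act(), g2.act())]
    g1.score += p1
    g2.score += p2
    # Macro payoff feeds back down, reshaping micro incentives (Steps 2-3).
    for g, p in ((g1, p1), (g2, p2)):
        for m in g.members:
            m.p_coop = min(1.0, m.p_coop + 0.01 * p)

random.seed(0)
cohesive = MacroAgent([MicroAgent(0.9) for _ in range(4)])
fractious = MacroAgent([MicroAgent(0.2) for _ in range(4)])
for _ in range(50):
    cohesive.internal_round()
    fractious.internal_round()
    macro_round(cohesive, fractious)
print(cohesive.score, fractious.score)
```

      In this toy run, the macro-level payoff gradually raises the cooperation propensity of the fractious colony's members until both colonies coordinate on the stag – a crude instance of group-level strategy feeding back to reshape micro-agent incentives, which is the recursive (rather than one-directional) coherence described above.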

      Broader Implications
      This line of work would bridge game theory, statistical physics, and the philosophy of information. It could demonstrate not only that integration is driven by information alignment, but also that recursive coherence requires continuous negotiation across scales. In other words, ethics emerges because no level can fully suppress the informational autonomy of another.

      Your suggestion to apply re-scaling/renormalization tools makes this formally tractable: rather than treating coherence as a black-box emergent property, we can analyze how payoffs “flow” when the scale changes, and identify the thresholds at which ethical or governance-like behaviors must emerge.

      Closing
      We agree that this research axis is both fertile and necessary. By layering games across scales, the model would directly test the thesis that recursive coherence—not raw competition—is the primary driver of integration. It would also open the door to a formal renormalization framework for ethics and intelligence, where what “counts” as a payoff shifts with scale, but coherence remains the universal attractor.

      Your reply strengthens the case that information and payoff dynamics can be treated as general substrates for the emergence of life, mind, and governance.

  12. Joel

    Thank you for sharing the work, fascinating stuff! The thing I found most interesting was the experiment with the Physarum. I am much more comfortable thinking about biological systems versus computer programs.
    I keep returning to the beginning of the results, “Establishing conditions that facilitate merging,” and the nine plates in the first round of testing. I’m not sure why the gaps prevented the Physarum from merging. Could you elaborate on that?
    The fact that all nine plates died is very interesting to me. The LU352 strain provided by Audrey Dussutour – was that a wild type? Or had it been in culture for some time? If food had been readily available to the small pieces detached from the larger Physarum, might some of them have lived? For a biological system to continue reacting it needs stimuli. Did the pieces fail to thrive from being in an impoverished environment?
    Very interesting stuff, thanks!

    1. Mike Levin

      yeah, if you cut into the agar and make too wide/deep a valley, I think the Physarum doesn’t sense the other side (to know there’s something to merge with) or possibly decides it’s too much effort to try. LU352 has been cultured before. Enrichment of environment for Physarum is a great idea, we are working on something similar (balance of surprise, which is stressful even for algae, and boredom).

  13. Luísa Bonin

    Hi Michael, thanks for sharing this knowledge. I work with social impact projects, particularly interested in cross-organization collaboration and feedback loops focusing on building better relationships in the philanthropic field.
    I’m following your work and the work of your lab only since this year, so I might lack the right science language.
    I have one comment and one question.
    The comment is about option 1 you shared, when everything dies. It reminded me of a time when I went with my mom to the funeral of a family friend who died young from leukemia – I remember speaking with my mom about how weird and sad it was that cancer had this goal: to take over a body and then die with it.
    The question is: in my work as a facilitator of groups for collaboration, I work a lot based on Schutz’s group dynamic theory.
    According to Schutz’s theory, group dynamics revolve around three fundamental interpersonal needs: inclusion (the need to belong), control (the need for influence and structure), and affection (the need for closeness and warmth). These needs influence how individuals behave and relate within groups.
    In this process of connecting human and cell behavior, have you or your lab ever considered how this concept could be applied to how the cells behave?
    Thanks and all the best,

    1. Mike Levin

      > how weird and sad it was that cancer had this goal: to take over a body and then die with it.

      all beings have a cognitive light cone – the spatiotemporal size of their goals – and sometimes the temporal horizon is too short to see the consequences of one’s actions.

      and no, we’ve never used Schutz’s Theory, but I’ll think about it.

  14. Alexander

    Perhaps there is a sense in which, on a macro level, this split/merge tendency to reduce the number of separate agents is manifest in what we saw geopolitically throughout the 20th century: the rise of American imperial hegemony as a response to the prisoner’s dilemma inherent in nuclear armament.

    Bertrand Russell talks about this at the very beginning of the nuclear age, that the most likely path for human survival in a world with nuclear weapons is through world government, except that he argues that countries as agents are not likely to merge voluntarily.

    I’ve always treated Schwitzgebel (2015) as a little tongue-in-cheek, and his essay more an indictment of materialism than an argument for group consciousness of nations, but the thought experiment is interesting.

    Any thoughts on voluntary vs. coercive subsumption of agents at the macro scale, given your findings examining game theory in this manner? Maybe I am misinterpreting or misapplying your observations and conclusions here, or going too far in looking for mind in the clumsy and fragmented composition of a modern nation state?

  15. Duncan Geil

    Perhaps I am too eager to merge with your hive mind through my participation, but this sounds very much to me like something John H. Holland describes with his Echo simulation in Hidden Order: How Adaptation Builds Complexity. I know I have heard you discount (at least hyperbolically) the contribution of emergence-related reasoning before (I believe he wrote another book about complex adaptive systems called Emergence). Perhaps I am mistaking an echo for a rhyme? I am sure you are familiar with Holland’s work – can you help me see the difference or differentiation between your approach and his?
