Q&A from the internet and recent presentations 3

The following is another set of questions (in no particular order or organization) that I’ve been asked after talks, in emails, on Twitter, etc. and my attempts to answer (some of the most common are here). I saved these because they are either interesting or because they come up a lot (or both).

Do I use AI such as language models?

Very rarely. I’m not against its use, but I haven’t found a good use case for it besides entertainment. I sometimes make images with Midjourney, and I occasionally ask GPT or Claude to come up with an acronym or something. Besides such creative things, I don’t think it’s reliable enough yet to use it for serious work (most of what it writes requires a lot of editing from me and fact-checking, which sort of defeats the purpose). I would be interested in use cases where it might be reliable enough to actually save me time.

Did I invent bioelectricity? Did everything in bioelectricity get invented in the last century, perhaps by Robert Becker?

I certainly did not invent the idea of non-neural bioelectricity – serious investigations by brilliant and indefatigable people date back to Galvani and Volta, through Emil du Bois-Reymond’s work on wound currents in the late 1800s, and a ton of work from the 1920s through the 1990s. See for example my write-up on the Godfather of bioelectricity, H. S. Burr, or an interview with Rich Nuccitelli, a truly great contributor to this field who spans the classic and modern eras. A lot of the old work is referenced here and especially on p. 298 here. My lab and work owe these pioneers a lot.

However, as in all science, the early work did not answer (or even identify) all the key questions, nor push into all the possible applications. What is new since our work began (~1998 or so) is:

1) The first molecular tools to read and write bioelectrical states of non-neural tissues in morphogenesis and connect these processes to downstream gene expression. This enabled bioelectricity work to finally be connected to the ongoing advances in molecular genetics – important for bringing the field into the mainstream.

2) My idea that somatic bioelectricity is a kind of cognitive glue, just like bioelectricity in the brain; the idea that cell groups form a collective intelligence that navigates anatomical morphospace as a form of problem-solving, and that bioelectricity is the medium for its memories and computations, and the interface by which we can re-write these memories and guide the living system toward desired outcomes. In other words, that bioelectricity isn’t just another piece of physics that we need to keep track of in development or regeneration, but a uniquely tractable entry point to the cellular collective intelligence.

3) A focus on patterns of resting potential (Vmem), rather than electric fields, and methods to understand how spatial distributions of membrane voltage can encode organ-level information, showing that it can be used to trigger subroutines that build entire organs (e.g., this).

So, absolutely, the idea of bioelectricity has been around for a very long time, and people like Robert Becker inspired me and my lab. But the field is in a very different place now: neither the link to molecular biology and the behavioral sciences, nor the advances in the biomedicine of birth defects, regeneration, cancer, and synthetic bioengineering, had been made by prior workers.

Isn’t everything you talk about a kind of computation?

I’m a (former) software developer too, so I get it – I love coding. But it’s not so simple. I am currently writing a paper called “Booting up the agent” which is all about this earliest process, comparing (and contrasting) it to what happens in a traditional computer, in the nanoseconds/microseconds when it first changes from a hunk of metals obeying the laws of physics to a computational device obeying an “algorithm”. It’s a deep question as to how living systems boot up, and in general I’ve talked about a lot of the differences between traditional computers and the agential material of life, and thus the places where standard computer metaphors fail us. For example, living things have to uncompress (with creative improvisation) and decode their own memories, via polycomputing, and to construct their own hardware as they go along – it’s not like a Turing machine where the data and the machine are cleanly separated.

Two recent papers have critiqued the “agential stance” (DiFrisco, J., & Gawne, R. (2025). Biological agency: a concept without a research program. Journal of Evolutionary Biology, 38(2), 143-156. doi: 10.1093/jeb/voae153; and Potter, H. D., & Mitchell, K. J. (2024). A critique of the agential stance in development and evolution. In The Riddle of Organismal Agency (pp. 131-149)). Both argue that loose ascription of agency is neither warranted nor helpful, and that more general systems principles (the products of which are iteratively subjected to the verdict of natural selection) are up to the task of explaining the phenomena that may be prompting the agential stance. What do you think about that?

This comes up a lot. In all of my relevant papers (for example, here), I say explicitly that loose ascription of agency (i.e., treating agency as a linguistic or philosophical matter that can be brought in according to one’s preferences) is indeed inappropriate. My point is that we now have rigorous ways to determine precisely when ascriptions of agency (i.e., porting of tools from the cybernetic and behavioral sciences) are appropriate: if and only if they facilitate new discoveries and new capabilities at the bench. The DiFrisco paper makes an empirically incorrect claim about a lack of research agenda. These ideas not only have a research program, but the research program has been paying off very well, in terms of new discoveries heading to the clinic and new capabilities in bioengineering. The key is that loose ascriptions are indeed unhelpful, but that is not what we are doing. What we are doing is specifically showing, through examples of new discoveries (not simply “explaining” things that had already been done by someone), that this stance leads to finding new biology and new capabilities.

Potter and Mitchell argue that the kinds of evidence often put forward in the literature – focusing exclusively on cases where embryonic regulation or adaptation to challenges is successful in achieving a viable outcome – give a mistaken, seductive impression of morphogenesis as performed by active, problem-solving agents. But this represents both an ascertainment bias and a confirmation bias. Wouldn’t a more comprehensive survey of the ways in which development often goes wrong, or fails to regulate, or even the many cases where plasticity responses are themselves maladaptive, undercut this impression?

This is an interesting question. Indeed, the failures of development are as informative as (or often more informative than) the successes – we don’t think they undercut the point at all. The limitations of a cognitive system as it navigates a problem space say much about its level on the spectrum of agency and its properties, as, for example, this classic popular book showed for neuroscience (in any case, the study of disorders is one of the most tried and true strategies for making progress in neuroscience and medicine). For example, it is critical that embryos can make mistakes – a clear sign of a goal-seeking system. I am writing a separate paper on the errors of morphogenesis and how cognitive pathologies give helpful insights into the etiology (and repair interventions!) of birth defects, as a companion to this paper, which describes morphogenetic competencies as successful instances of intelligence.

Can’t all the examples you give in your talks be accommodated by standard views of genetics and emergent complexity?

Everything hangs on “accommodated”. Yes, the discoveries that have been facilitated by these ideas can, after the fact, be accommodated by general systems principles, or indeed, can be accommodated by physics (in the minimal sense that they are not inconsistent with them). Systems principles are not wrong, and the discoveries did not find magic. But the notion that these principles are therefore sufficient, and that talk of agency should be eliminated, does not follow, for two reasons. First, it was not those principles that led to these discoveries – they did not predict them (in fact they blocked them; for example, 2-headed planaria were first described in 1903, but no one bothered to recut them until 2008 because it was considered obvious that they would go back to the genetic default of 1-headedness in the absence of genomic editing). Second, for us, the question is not whether you can accommodate discoveries after the fact, but whether a particular set of conceptual tools (and the technology they imply – such as bioelectric reading of prepattern memories outside the brain, first developed in 2002) uniquely facilitates new discoveries more than the status quo. And of course, any complex instance of human cognition can be said to be “accommodated by physics”, and any standard instance of molecular biology explanation can be said to be “nothing more than quantum foam doing what quantum foam does”. The problem with such statements is not that they are wrong, but that they are sterile for the next steps of discovery. They simply don’t move science forward.

In your work on diverse intelligence, are you talking about real cognition/intelligence, or only “as-if”?

Let’s assume that genuine agency actually exists – in brains – and has a mechanistic basis. Beyond this, one position that might be taken is: outside brains, we don’t see any agential phenomena, only robustness and attractor phenomena, and everything else is “as-if”. One simple answer is that the kinds of problem-solving behavior we see in morphogenesis are underwritten by mechanisms evolutionarily homologous to the way it’s done in brains: bioelectricity. Taking cognitive science and evolution seriously, why would one say that one is genuine and the other is merely as-if? If neural ion channels, gap junctions, neurotransmitters, etc. run algorithms for measuring and reducing error, storing memories, etc., in the brain, then one needs a solid argument for why the exact same phenomena, mediated by the same molecular mechanisms, do not count as “real”. I have yet to come across any convincing principled argument for why agency is real in the brain but as-if elsewhere, other than a desire to keep to pre-scientific colloquial usage.

Also, I think an “as-if” stance overall is dangerous because it implies its alternative, “real”, suggesting that some other concepts (pathways, attractors, etc.) represent a true, un-distorted view of reality. I don’t believe that, as scientists, we have access to any absolute ground truth; all our formal models are metaphors of different degrees of fecundity and insight, somewhere on a continuum of “how well do they enable new discovery and effective applications”, capturing some but necessarily not all features of the world. In other words, a binary view of the “real vs. as-if” distinction is unhelpful here, as it is almost everywhere else.

Your example of kidney tubule formation, despite an induced radically different cell size, in newts does not strike me as remarkable – I think many endothelia in many species (including blood vessels in vertebrates and the tracheal system in Drosophila) use a strategy whereby large diameter vessels are wrapped by multiple cells, and smaller ones by single cells (which bend more in the process). So this example doesn’t seem especially notable, as something similar already happens in many species.

This is true, and I am not claiming that newts found a solution that no other biological system has found. What we are saying is that they can find and deploy a solution that is not a normal part of their developmental repertoire (at least, in the kidney tubule), which reveals a problem-solving capacity to deploy novel behaviors in an appropriate, context-specific, but not hardwired manner. This non-magical but remarkable process is common in behavioral examples of problem-solving in intelligence testing; it doesn’t require a truly novel set of behavioral components, but rather appropriate and effective improvisation, which can include mechanisms used elsewhere.

The notion of cognitive glue, as necessary to bind cells toward a specified target morphology, assumes that final morphology is a predefined target that a collective can orient towards. Isn’t morphology merely emergent? Classical models of self-organization, such as Turing instabilities or other pattern-formation models, do not have a “set point”; the stability of the fixed point is an emergent property of the dynamical interactions among the elements of the system.

It is true that Turing (and other reaction-diffusion) mechanisms do not depend on explicitly minimizing distance from a setpoint. Indeed, some morphogenetic events (e.g., wound healing) can be usefully explained as goal-less, pure feed-forward emergence (purely open-loop, like cellular automata or reaction-diffusion systems), but some cannot. It is important to remember that the idea of morphogenesis as the result of a process with zero feedback (no setpoint homeostasis) is just a hypothesis, and a priori not more plausible (especially given the importance of feedback in biology) than models in which there are one or more levels of homeostasis, and thus setpoints (a.k.a. simple goals). Moreover, some capabilities (specifically, many of interest to current biomedicine) are not facilitated by a zero-feedback model, because they require solving an extremely hard inverse problem between genetic information and system-level outcomes of anatomy and physiology. In many of my talks and papers, I show that some morphogenetic events do have an encoded setpoint, because we and others have succeeded in re-writing it (which is the practical criterion for being able to say convincingly that a system has a goal).
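The open-loop vs. setpoint distinction can be made concrete with a tiny toy simulation (purely illustrative – the function names and numbers are mine, and nothing here models real morphogenetic parameters): a feed-forward process never references a target state, while a homeostatic one measures error against a stored setpoint and reduces it, so only the latter recovers from a perturbation.

```python
# Toy contrast: open-loop dynamics vs. setpoint homeostasis.
# Purely illustrative -- no real biological parameters are implied.

def run(steps, correct_toward=None, perturb_at=50, perturb_by=3.0):
    """Evolve a scalar 'anatomical state'; optionally apply
    error-reducing feedback toward a stored setpoint."""
    x = 1.0
    for t in range(steps):
        if t == perturb_at:
            x += perturb_by                  # external perturbation ("injury")
        if correct_toward is not None:
            x += 0.2 * (correct_toward - x)  # closed loop: reduce error to goal
        # open-loop case: no term ever references a target state
    return x

open_loop = run(200)                        # never measures any goal
closed_loop = run(200, correct_toward=1.0)  # homeostatic setpoint = 1.0
print(open_loop)    # stays displaced by the perturbation (4.0)
print(closed_loop)  # returns to ~1.0 despite the same perturbation
```

The point of the sketch is only the practical criterion mentioned above: in the closed-loop case there is a re-writable stored value (change `correct_toward` and future error-reduction heads toward a different outcome), whereas in the open-loop case there is nothing to re-write.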

Doesn’t the notion of a stable morphogenetic goal conflict with stable defective outcomes? Does this mean that morphologies that deviate from a supposed “goal” could not be constituted by cells in metabolic/physiological equilibrium?

Indeed, some “malformations” are themselves stable states (like our double-headed flatworms, which 10+ years of re-cutting experiments by students in undergraduate classes have confirmed). It is crucial that stable states are re-writable, which means that future processes will reduce error toward a different outcome than before – they are often not hardwired. My claim is not that there is some universal way to say what pattern is “correct” and what is “malformed” – simply that some processes in morphogenesis use corrective feedback to achieve a specific outcome.

Why do you do all the outreach?

My goal is to: a) foster Open Science (information available to all who want it), b) reach young people early in the STEM pipeline, to hopefully impact their thinking before their conceptual frameworks calcify and their commitments make it hard to change course, and c) reach experts in other fields who may bring us valuable insights and new tools from other disciplines (foster trans-disciplinary efforts). It should be noted that I actually don’t do much extra work for the outreach – mostly all I do is hit Record when I give Zoom talks or have (some) meetings with colleagues that I’d be doing regardless, and put them directly online. I don’t edit or do anything else to improve popularity, I don’t have sponsors, and I don’t charge for it. I’m not interested in optimizing the numbers of generic viewers or followers – I want the material out there, and beyond that, I just assume that the people who have the most ability to do good with it will find it. The two exceptions are: a) my blog (where I write things that don’t fit into standard academic papers, and things that are my personal opinion and not an official stance of my lab), and b) interviews I give on podcasts and such, which I use to practice: each time I do one, it’s an opportunity to hone the way I present information to make it clearer and more convincing.

Can’t other conceptual interpretations of your work be given besides the ones you emphasize?

Yes, as with everything in science, multiple interpretations can exist. Of course, after someone does something, there are (infinitely) many interpretations. Specifically, whatever you do, after the fact, someone can decide to focus on the particle level and say “see, totally managed by physics”. I think you have to call these things in advance, not after someone else has done the interesting thing. The question for us all is: which conceptual framework enables you to do the interesting experiments in the first place? My specific claim is that all the new stuff we’ve discovered over the last 30 years is new (i.e., hadn’t been done before) only because we have a framework that is especially fertile for a large domain of new findings. Time will tell. No need to argue – everyone grab your favorite conceptual framework and see what cool things it lets you discover.

Why don’t you drop all this philosophical stuff? The experiments are good – just do those.

Impossible; the reason we did those experiments (and others hadn’t) is that one’s conceptual framework makes certain things invisible/un-askable and suggests looking at other things. Every framing shows you something and hides other directions. My only claim is that my framework (constantly updated, not frozen) has shown value in pointing us to new discoveries and new capabilities (empirical findings, some already heading toward the clinic), and suggests a massive research program for ongoing and future work. I am not saying it’s “correct” (whatever that could mean) or that I or someone else won’t find a much better one soon (I hope so!).

Do you claim to have established consciousness in Xenobots and such?

I haven’t made any strong claims about consciousness yet at all, in any system, nor pushed a particular theory of consciousness. Nor (as of August 2025) have I made any cognitive claims about any of our biobots (stay tuned, the data are coming). All my strong claims have been about publicly observable functional capabilities – cognition, intelligence, problem-solving, learning. I have begun to say a few things about consciousness in general, for example here.

Convincing patterns in the eye vs. gut cells battle: does the “cellular debate” you described happen to coincide with any existing consensus algorithm you know of, or is it fundamentally different?

Good question!! We’re not sure. What we do know is that it doesn’t go by majority vote. Sometimes a tiny piece of tissue convinces an entire organism to do something totally weird. We’re even toying with the idea that it might be novelty (like, “here’s an idea we’ve not had before, let’s do this!”) that sometimes puts it over the top. We’re working on it, but we don’t know exactly. It’s a very important problem for biomedicine, because the name of the game there will be: find the most minimal, but most convincing, intervention for the target tissue.
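Since no mapping to a standard consensus algorithm is known, the most one can do is illustrate the logical point with a purely hypothetical toy (the weighted rule, names, and numbers below are my invention, not a model of any tissue): if influence is weighted rather than counted, a tiny but highly “convincing” subpopulation can carry the decision.

```python
from collections import defaultdict

# Hypothetical toy: a weighted "persuasion" rule instead of majority vote.
# Nothing here models real cells; it only shows that a consensus outcome
# need not track headcount.

def decide(signals):
    """signals: list of (preferred_outcome, influence_weight) pairs."""
    tally = defaultdict(float)
    for outcome, weight in signals:
        tally[outcome] += weight          # accumulate influence per outcome
    return max(tally, key=tally.get)      # strongest total influence wins

# Eight cells push "gut" with weight 1 each; two cells push "eye"
# with weight 10 each -- the minority wins the "debate".
signals = [("gut", 1.0)] * 8 + [("eye", 10.0)] * 2
print(decide(signals))   # -> eye
```

With equal weights this rule reduces to majority vote, so the interesting empirical question is precisely what (novelty? signal strength? timing?) plays the role of the weights in real tissue.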

It’s absurd to say your liver is conscious!

“Absurd” is ok as poetry – yes, it certainly seems absurd given how baked in the parochial views of other minds are. Like a lot of art, it calls for us to examine ourselves to see why it seems so absurd, and perhaps shift our priors. But what that statement isn’t, is science, engineering, or even philosophy. In order to be any of that, someone would have to not just *claim* it’s absurd (as the implications of relativity, quantum mechanics, and a lot of math seem, to our classical intuitions), but actually do the hard work and clarify *why* networks of other kinds of cells (with their ion channels, electrical synapses, neurotransmitters, microtubules, etc. etc.) can’t possibly do some degree of what the neurons in your brain do. Good luck. “Absurd” makes it seem like, of course there’s a convincing consensus story explaining it. But there isn’t. And more importantly, they’d have to show what practical benefit comes from their view – produce something useful, interesting, etc. – show the fertility of this binary way of thinking. I’ve yet to see either. In contrast, I’ve summarized many times the empirical benefits of investigating the continuum view. So do poetry if you want, but let no one confuse it for a scientific comment on a scientific claim, or a useful engineering pointer.

Anyway, my position isn’t that we know our organs to be conscious (any more than we *know* each other to be conscious). My position is that for the exact same reasons people currently attribute consciousness to brainy animals (and I list those reasons here), we should take seriously (i.e., try to shoot down in a principled way – not by fiat) the possibility that other kinds of meat can be a conduit for consciousness too.

Any “goals” a body organ may have are limited to that body.

Yes, probably. I didn’t claim you’d be having discussions with your liver about the financial markets or the latest film you saw. It cares about things happening in the physiological state space of itself and its neighboring organs, and with new tools (under development now!) we will, hopefully, get a glimpse of its world and communicate with it about the things it cares about. Btw, if it didn’t have both care/intent and a degree of intelligence to meet its goals under challenging circumstances, we’d be dead in short order. In any case, all cognitive beings are limited in how much of the world they can take in.


Why do you argue with people online? Or, conversely, why don’t you argue, when someone obviously has your view wrong or is critiquing it in an invalid manner?

I mostly don’t engage because it’s an infinite task – it would eat up all my time. All I can do is try to be as clear as possible in what I personally say, and let others figure out what they want to go with. When I do reply, it’s almost never about convincing the original person arguing – it’s to clarify my position on points that others may be thinking about as well. I’m currently experimenting with having Comments open on our YouTube channel (although I never read them or have time to reply to them). I think the jury is still out as to whether it’s helpful to allow them or not. But the comments on this blog tend to be quite high quality – I usually respond to those.

‘Emergent’, in my understanding, describes not something unexplainable or never seen before, but rather a behavior that “emerges” from the interaction of multiple “lower level” elements, which none of those elements is able to support by itself. Scaling often leads to emergence.

Letting slide for a moment the fact that this definition uses the very word it’s trying to explain, the problem is that it covers pretty much everything. The sum of the angles of a square – emergent? The conclusion of a valid proof – emergent? The sum of a converging series? The derivative of a moving object’s position? The integral under a curve? The sweater made from string (and its knots)? The parity (odd or even) of a string of digits? All emergent? Or are some things not emergent? What’s the decision procedure? I just don’t see anything it enables us to do. I’m still trying to understand what the word adds to our practical understanding or capability, whether it’s a natural kind or a temporary label, whether it’s relative to an observer’s state of knowledge or surprise, etc. I don’t think anything that flimsy can be used to support the thing some people want it to support (sharp, distinct categories separating “real minds” from “dumb machines”), which is the only reason we’re talking about emergence – I don’t actually care about it one way or the other, except that people think it carries some sort of oomph with respect to propping up binary categories, or as an alternative to an ordered space of patterns to be investigated. That’s the only reason I’m pointing out its failings – it’s fine, right until you try to use it for anything important or new. Maybe it even has some uses somewhere; I am not claiming it’s impossible to find a use for it – just that it doesn’t do the job it’s often asked to do.

Why look for life extension? Our short lives (and our impending death) are what give us meaning.

I could almost buy the importance of short lives if we all had the chance to intentionally curate exactly what (and whom) we wanted to appreciate, experience, and love in the short time period available to us. But most of us don’t – huge numbers of beings world-wide spend their short lives without that opportunity; they get the short life, but not the benefit of being able to be choosy and thoughtful about how to spend it. So I would rather say f-u to the arbitrary limits set by frozen evolutionary accidents, cosmic rays, viruses, and selection pressures that don’t care anything about love, beauty, or meaning. Maybe we can do better; at the very least, we can try. And it’s not at all obvious to me that lives that were however long you wanted them to be – not how long your telomeres or your blurring bioelectric patterns wanted them to be – would have any less meaning. In fact, maybe it takes longer than ~80 years to gain the wisdom needed to really have an agential life. Evolution doesn’t care about any of that, and there’s no reason to think that our current lifespan is remotely sufficient to truly experience what we are capable of in terms of creativity, wisdom, and compassion. (And btw, if we think the ~80-year limit is great for making each moment count, then heck, why not reduce it to 20? Or 10? Then every moment would really count!)

But even water can solve a maze!

Right, aspects of physics and materials properties underlie intelligence (as is well understood and exploited by the field of morphological computing and others). Is the claim in that video that “real intelligence” has to mean there’s no physics underneath? Or that when we figure out the physical mechanism, a problem-solving competency ceases to be a kind of intelligence and becomes “just physics” – intelligence is whatever we can’t find any explanation for? Or that all intelligence requires the agent to know what problem it’s solving (which the voiceover mentions slime mold not having) – there’s no intelligence below reflective metacognition? Like most such pseudoproblems, the confusion in the argument in that video is caused by assuming a binary “is/is-not intelligent” (and also not specifying what “real” intelligence would mean exactly, and how it scales up from the activities of materials, like water, that make up our brains and bodies). I propose a simple engineering perspective on this issue: every material component has certain competencies I can count on when building with it. What can I count on water to do? Like many materials, I can count on it to minimize certain quantities in a context-sensitive manner (go down a gradient), and perhaps other things that need to be discovered by experiment (not assumed from a philosophical armchair – see Gerald Pollack’s work). It’s likely toward the far left of the spectrum of persuadability, but it’s not 0!! I know it’s not 0 because I can depend on it as an engineer, and it saves me time and effort not having to micromanage it. I can count on Physarum to do more, and apparently I can count on chemical networks and materials to learn (see our papers on training GRNs, Walter Fontana’s papers on probabilistic inference by chemical circuits, and a recent review about learning in materials). And it’s exactly from those kinds of minimal capabilities that more complex kinds of intelligence are built up. Where else would they come from?! No big gotcha or paradox here – we just need to leave binary thinking behind and take an empirical, practical approach. Cognitive claims are interaction-protocol claims. We’ve known for >100 years now that some materials exhibit properties that, when treated with the tools and concepts of behavioral science, provide more empirical tractability than is afforded by not doing so, in deference to ancient categories propping up university department boundaries. All of this has been discussed in numerous reviews in the diverse intelligence field, but it still causes a kind of pearl-clutching when it is suggested.
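The “what can I count on water to do” point can be sketched in code (an illustrative caricature, not a physics simulation; the function names and numbers are mine): a particle that reliably descends a local gradient is a small competency an engineer can depend on without micromanaging the path.

```python
# Minimal sketch of a gradient-descending "droplet" -- a small but
# nonzero competency one can count on. Illustrative only, not physics.

def descend(height, x0, steps=1000, lr=0.01, eps=1e-6):
    """Repeatedly step downhill along the local slope of height(x)."""
    x = x0
    for _ in range(steps):
        slope = (height(x + eps) - height(x - eps)) / (2 * eps)
        x -= lr * slope                   # move against the local gradient
    return x

bowl = lambda x: (x - 2.0) ** 2           # a valley with its bottom at x = 2
print(descend(bowl, x0=-5.0))             # ends up near 2.0
```

The engineering payoff is exactly the one described above: `descend` is dependable without any micromanagement of the route – the minimal end of the competency spectrum that more complex intelligences are built from.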

I feel guilty about doing experiments with planaria and other small creatures…

Yes; I feel guilty about the small creatures as well, absolutely. But I feel more guilty about letting down the human patients who email me every day in the most horrific medical suffering and ask, “wtf is taking you all so long, figure this out!” That’s one of the tragedies of this world: no one has their hand off the trolley lever. Doing nothing because you feel guilty about scientific research is just shifting victims. Now, I’m not mocking the view that says “I see the suffering, and I consciously decide that I’m not smart enough to adjudicate the morality tradeoffs, so I will refrain from action”; I can respect that view (assuming it’s coming from a vegan who spends most of their time fighting against factory farming, etc.). But almost no one really has that view (because even as people complain about Xenobots and such, they uniformly run to the hospital when their kids get sick, and pray like hell that someone has figured out something). It’s 99.9% lack of knowledge/imagination about where medical treatments come from and what you would do if you or someone you loved had a treatable problem, and 0.1% experience-informed deliberate decisions. I’m just estimating the percentages here; I really don’t know…

Machines, like computers, can’t produce consciousness.

I agree: computation – as in, our formal model of algorithms – cannot produce consciousness (“understanding” is not synonymous, so we should decide which thing we’re talking about, and perhaps even specify how biochemical brains get around these limits). But I think not even simple “machines” are well-captured by those formal models. Let’s not mistake the model, and its limitations, for the real thing. More soon, but I cover this a bit here and here. Also, I don’t think anything we do – synthetic or biological – “produces” consciousness; what we do is make interfaces that facilitate the ingression of consciousness into the physical world.

The stuff you talk about, such as collective intelligence, has been mentioned in many ancient texts.

Yes, absolutely – some of these are very old concepts. Btw, my favorite classic version of this idea is that of “group karma” – very close to collective intelligence (CI) in its approach to composite agency. But to clarify: like many very old concepts, the useful thing is not to “merely mention” them, but to find actionable ways that they advance research. What I did (that’s relevant here) is to formalize how collective intelligence can be used to understand, predict, and control morphogenesis as problem-solving behavior in anatomical space, and my lab created tools and used them to test specific hypotheses about the biophysical mechanisms by which the goal states of biological CIs can scale up or down. There’s also other work on seeing molecular pathways as CIs that can be trained, which is being developed for drug-conditioning applications, etc. These things unlocked a number of new paths forward in regeneration, embryogenesis, and cancer suppression, and enabled a roadmap, pursued by people in my group and others’, for the development of new interventions (communication with that collective intelligence) that are heading towards the clinic (also bioengineering). It’s important to keep up with the primary literature, to see the practical outcomes of specific concepts and what is or is not moving beyond being merely mentioned.

I disagree with your emphasis on compassion toward novel kinds of creatures and minimal agents. We should be focusing on humans and such.

I agree with this – we have no argument. I say exactly the same thing – “our capacity to deal ethically with other minds we KNOW to exist is very low” – to people who email me about the ethical urgency of figuring out the status of Xenobots etc. It’s not because once we know they can suffer, we will suddenly behave better; factory farming of pigs etc., and human history with each other, tell you that doesn’t just happen. However, I do think it’s critical to advance the study of diverse minds, for the following reason. And maybe it won’t work on the current generation, but it’s important for the forthcoming ones. If we get a mature theory of mind going – one not limited by the blinders we have on now – then maybe, just maybe, it will be harder and harder for us to deploy that in-group/out-group dynamic that humans love to set up. “They are not quite like us, so we don’t need to treat them with compassion”. Imagine – if the kids of the future understand the full range of our cognitive kin, won’t it seem like a much more obvious lift to be nicer to things that, comparatively, are very much like us? That is my hope. I have no idea if it will happen, but in my more optimistic moments, I think that a scientifically-grounded theory of diverse intelligence will, in the long run, make it harder for people to do the mental gymnastics required to treat others as fundamentally incapable of real suffering like “us”. Let’s expand the spectrum radically (to the level that science supports, not fantasy), and the distance between us will shrink exponentially. The differences we get worked up about today will be laughable to mature humanity in the future.

Autonomic processes like cardiac rhythm or digestion perform intricate, life-critical functions without any conscious agency.

Well, you don’t know that. The conscious agency of these systems is not usually available to the main, linguistic consciousness of the human body (the one who wrote the above question), but that doesn’t mean they are not conscious themselves. That is, you don’t feel my consciousness either, so it’s no surprise that your left-hemisphere “you” doesn’t feel your liver being conscious. I gave a talk about that, for example here. I don’t have a strong new theory of consciousness out in public yet, but I do think we need to be very careful with assumptions like the one above.

Might voluntary motor circuits possess a level or type of self-organizing informational complexity that naturally aligns with the emergence of conscious experience, while autonomic systems, though complex, do not cross that threshold or gain adaptive benefit from such phenomenology?

It’s possible, but we’ve been doing analyses (attached is an early example) of non-brainy signals using the same metrics neurologists use to distinguish a pile of neurons from an aware human mind (with locked-in syndrome, for example), and – as I predicted – the results are very interesting. Much more is coming on this, so I am skeptical of the above distinction. Nevertheless, I do think it’s interesting that our linguistic consciousness picked 3D space (and the muscle actuators needed to move through it) as the space it’s aware of, instead of the many other spaces in which our bodies operate (physiological, anatomical, etc.). I suspect the reason is evolutionary and could readily have been otherwise, and we are making tools to try to give language to these other intelligences.

What do you think of Sheldrake’s hypothesis?

Sheldrake proposes a sensitization in the universe – a generic sensitization – and in a certain sense I go wider than him, because I think the universe is full of not just sensitization but all kinds of other cognitive capacities that keep being found in matter. I don’t make the larger claim strongly because I don’t have a way of doing experiments at that scale.

How should we understand “meaning” in biological systems?

I think it’s the same in all systems (biological or not), but biology is what we tend to call the study of those systems that are good at generating their own meaning and at making it easy for us (as fellow biological systems) to see that they’ve done so. I suspect it has something to do with structuring one’s experiences, memories, etc. in a way that provides long-range order. In other words, meaning is a kind of high-order generalization, interpretation, saliency, etc. that extends far from the immediate utility or applicability of a mental structure. It’s about extending the cognitive light cone of a thought so that its relevance to times and places far beyond the current, practical context becomes established. Here’s a paper on it.

Real minds operate by reasons; machines do so by causes.

This is a very thorny area, if you assume that our brains obey physics. Basically it’s very hard to specify what those words really mean, in a useful way. Here’s an attempt. Reasons are what we call it when a system obeys high-order patterns in the Platonic space, causes are what we call it when a system obeys the low-order ones. I think it’s a continuum.

You talked about the generic problem-solving capacities of systems. My question is: how complex must these systems be? The generic problem-solving capacities of a rock and a biological system (artificial or not) are bound to be very different. Or do you think this is a basic property of any matter?

1) If we ask what the most simple, most basic, low-end version of intelligence would look like: we know it won’t look like advanced intelligence, since we’re purposely looking for the lowest point on the spectrum. What is the minimal requirement? I think there are a couple of criteria, but one key one is the simplest form of goal-directedness – least-action laws. Even a humble photon manages to find the least-effort path to its target (and amazingly, it does so despite the fact that you would have to have already traveled all the possible paths to the end to really know which one is fastest! Where is its machinery for doing that? I think it’s on the other side – particles are interfaces to the smallest minds in the Platonic space; all they know is this one trick). I think least-action laws are what minimal intelligence looks like (and we call it physics – we’ve decided it’s not a branch of psychology, but I think it is).
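The photon’s least-time trick can be made concrete with Fermat’s principle. A minimal sketch (my own illustration, not from the post; all parameter values are arbitrary): light crosses from a fast medium into a slow one, and brute-forcing every candidate crossing point and keeping the least-time one recovers Snell’s law, with no “machinery” visible in the photon itself:

```python
import math

def travel_time(x, a=1.0, b=1.0, d=2.0, v1=1.0, v2=0.5):
    """Total travel time from (0, a) to the interface at (x, 0), then on to
    (d, -b): fast medium above (speed v1), slow medium below (speed v2)."""
    return math.hypot(x, a) / v1 + math.hypot(d - x, b) / v2

# Brute-force "try every path": scan candidate crossing points on the interface.
candidates = [i * 2.0 / 10000 for i in range(10001)]
best = min(candidates, key=travel_time)

# At the least-time crossing point, Snell's law holds: sin(t1)/v1 == sin(t2)/v2.
sin1 = best / math.hypot(best, 1.0)                 # sine of incidence angle
sin2 = (2.0 - best) / math.hypot(2.0 - best, 1.0)   # sine of refraction angle
print(f"crossing at x={best:.3f}: sin1/v1={sin1 / 1.0:.4f}, sin2/v2={sin2 / 0.5:.4f}")
```

The bend toward the fast medium emerges purely from minimizing the time functional, which is the sense in which “finding the least-effort path” can look like a competency without any internal mechanism.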

– So how do we recognize intelligence when it’s so minimal? To make progress, I like a simple, observable engineering property: autonomy. Specifically, for any system, how much can I trust it to do without me being there to force it? Like, if I’m building something and I have a homeostat module, I know what I can count on it to do – keep a certain variable in a certain range – and I can delegate that job to it. It has non-zero autonomy, so I think it’s on the spectrum. But surely the rock doesn’t do anything by itself? Actually, it has a minimal capacity. Imagine: as an engineer building a roller coaster, I know I have to force the car up the mountain. But I don’t have to do anything for it to come back down! A true zero would be something where I had to worry about *everything* it was supposed to do. But even dumb rocks and single particles obey least-action principles, which means there are some things I can trust them to do. They will follow a gradient; they won’t do delayed gratification, for example, but they will do the simple “taxis”.

– I asked Chris Fields once: is it possible to have a world in which matter doesn’t even know how to do least action? He said the only way to do that is to have a universe in which nothing ever happens (so that’s interesting – a universe with 1 particle in it and nothing else has zero intelligence in it, but as soon as you have 2 things, the journey begins). But it means that in our world there is nothing so stupid as to not be on the spectrum of persuadability, and thus I believe that in this world, intelligence of some sort is baked into all matter – there’s no truly dead matter anywhere, minimal though it may be. More accurately, it’s not baked into the matter – matter of even the simplest kind is already a tiny interface or pointer through which certain kinds of patterns can ingress from the Platonic space.

– So what about the rock? I think we call “life” those things that are good at scaling the intelligence of their parts. The rock doesn’t do anything that its particles don’t already do, so we call it dead and not part of biology. The cell does many things its parts don’t do, and so we call it life. Life, I think, is what we call systems that significantly align their parts so that the cognitive light cone of the whole is bigger than that of its components (and projects into a new space). Richard Watson has some additional useful thoughts on what the parts are doing. But I think life is what we call systems that channel much more impressive patterns than their parts can.
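The autonomy criterion above – what can I count on the module to do without forcing it – can be sketched in a few lines (a toy of my own, not anyone’s published model; the setpoint and gain are arbitrary): a homeostat that only knows how to nudge a variable back toward its setpoint can be trusted to hold it in range despite constant perturbation:

```python
import random

SETPOINT = 37.0  # the value this module is trusted to hold (arbitrary choice)

def homeostat_step(value, gain=0.3):
    """One corrective step: nudge the variable back toward the setpoint."""
    return value + gain * (SETPOINT - value)

random.seed(0)
value = 50.0  # start far out of range
for _ in range(200):
    value += random.uniform(-0.5, 0.5)  # environmental perturbation, every step
    value = homeostat_step(value)       # delegated correction; no outside help

print(round(value, 1))  # ends up near the 37.0 setpoint despite the noise
```

The point of the sketch is the delegation: once built, nothing outside the loop has to steer it, which is the non-zero autonomy being described.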

2) There’s one other thing, though currently very few people like this line of thinking (the mechanists think there’s nothing special in life, and the organicists think the magic of life doesn’t exist in “computers”). Even extremely small, deterministic, obvious algorithms – like bubble sort – turn out to be doing things not explicitly in the algorithm. They do delayed gratification and some other side-quests no one noticed in 60+ years of study (because everyone assumed they could only do what’s in the algorithm, being good machines). These extra things they do are not merely unpredictability or complexity, but patterns that would be recognizable to any behavioral scientist if they appeared in a more familiar guise. This (and other as-yet unpublished data) is telling me that

– Our models of dumb machines that only do what the algorithm says are as unfit for relating to actual machines as they are for life (i.e., there may not be anything anywhere in our world that is fully encompassed by our formal models of algorithms);

– We call “life” those systems that really, really amplify these intrinsic motivations so that it’s obvious to us; the ones that do it only “a little bit” we sweep under the rug and call them machines.

– We can say it’s “emergent” from the algorithm, but that doesn’t help anything; it totally breaks the whole point of an algorithm, which was supposed to describe what the thing is going to do!

– This means that I’m not a computationalist (I don’t think following an algorithm makes you alive, conscious, etc.), but I *do* think that the things we call machines are on the spectrum with us – not because of their algorithms, but precisely because they don’t just obey their algorithms any more than we do. I think this ability to get more out than you put in (which I blame on ingressions from the Platonic space) is a magic that haunts everything – biological bodies, engineered bodies, software systems, etc. Interestingly, this idea (that our connection to the ineffable in this space is not unique to us, to life, to cells, etc., but is also accessible to mundane “machines”) gets me more hate mail than my claim that cells and molecules have memory, cognition, etc. The mechanists try to be rational in shooting it down – they argue that the best stories will always be in terms of chemistry, nothing above (also, weirdly, not below either – somehow no one wants to go down to the modes of the quantum foam; they love chemistry for some reason). But the organicists and the religious folks, who otherwise like my non-reductionism, can get pretty nasty. We want to be exceptional, I guess, and if it can’t be just big brains, or some brains, or cells, then at least it had better be “organic material” that makes us unique. Some corner of the world needs to not be special for us to feel special. That’s my amateur psychoanalysis of what’s going on. My unpopular stance on the ineffable also being able to ingress through mere “machines” does have one upside: it keeps at bay some communities who would otherwise claim that my views support their human-centric, non-science agenda.

– So the AIs may have something interesting going on, but not because they talk as if they do – the talking is irrelevant, because we made them talk with an algorithm. What’s much more interesting is what they might be doing that the algorithm doesn’t mention. If dumb bubble sort can do it already, I strongly suspect LLMs and other large constructs do a lot of things we haven’t checked for yet (as does even minimal life, and even “non-living” materials). But we don’t know, and the language use may be a red herring.
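The bubble-sort point above can at least be framed concretely. A minimal sketch (my own setup and naming; the published analyses of sorting-as-behavior go much further, e.g. chimeric arrays and “damaged” cells): recast the algorithm agent-wise, where each element only compares and swaps with its right-hand neighbor, and track a global competency metric sweep by sweep – the kind of lens under which behavioral signatures, rather than just the final output, become visible:

```python
def sortedness(xs):
    """Global competency metric: fraction of adjacent pairs in order (1.0 = sorted)."""
    pairs = len(xs) - 1
    return sum(xs[i] <= xs[i + 1] for i in range(pairs)) / pairs

def cell_view_bubble_sort(xs):
    """Bubble sort recast agent-wise: each element compares itself with its
    right-hand neighbor and swaps locally. Record global order after each sweep."""
    xs = list(xs)
    history = [sortedness(xs)]
    for _ in range(len(xs) - 1):        # classic bubble-sort sweep count
        for i in range(len(xs) - 1):
            if xs[i] > xs[i + 1]:       # purely local rule; no global view
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
        history.append(sortedness(xs))
    return xs, history

final, history = cell_view_bubble_sort([5, 1, 4, 2, 8, 3])
print(final)    # [1, 2, 3, 4, 5, 8]
print(history)  # rises from 0.4 to 1.0 across the sweeps for this input
```

Nothing surprising happens in this vanilla version, of course; the sketch just shows the framing – a global metric recorded over purely local rules – under which the reported extra behaviors were looked for.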

When I mention xenobots, people often respond that their form might not be truly novel, but a past, encoded form re-triggered by conditions. Is there evidence that rules this out?

OK, we can think about it in 3 layers (and it’s not just Xenobots – also Anthrobots, and much more coming).

1) Let’s say it’s true. It’s amazing that “conditions” can trigger a totally different ancient form. What are these conditions? Both kinds of bots live in their totally normal physiological media, and have no genetic edits, no synbio circuits, no drugs, no scaffolds. The only thing that was done is to take away neighboring cells. It’s quite remarkable that this alone is sufficient to trigger a past form of morphology, physiology, and behavior, and this is a new finding. Also, what does “encoded form” mean – encoded where? We think we know what DNA encodes – proteins, and some timing information about their expression. Are we saying that every genome also “encodes” all the past forms? Is there a suggestion of how to read them out? There are some thoughts on what “encoding” might mean here, here, and here.

2) It doesn’t seem to be true. What past form of life in the human lineage was supposed to look and behave like Anthrobots, have their transcriptome (9000+ genes differentially expressed relative to the cells they come from), have the ability to repair nearby neural wounds, etc.? I’m not aware of a good candidate for this.

3) It’s not a hypothesis. “Might not be” is cheap and sterile; it doesn’t lead to anything because it’s compatible with anything. The more beneficial thing would be to a) say what prior form it was, and b) offer some predictive value about *what* form (morphologically, transcriptionally, and behaviorally) would be called up by specific environments. In the absence of that, it’s not a helpful or falsifiable hypothesis. One can always say that “maybe something in the past explains it”, but that gives no benefit to efforts in evolutionary biology, bioengineering, or biomedicine, which need predictive models of the problem-solving competencies of living material. In other words, we want to understand where new forms of behavior (in anatomical, physiological, and other spaces) come from, if it’s not selection, and how we can make better interfaces (embodiments) for the forms we want. I don’t believe it’s plausible or useful to assume there used to be Xenobots/Anthrobots that were specifically selected to be good at what we see them doing now. If we say that they learned to do what they do at the same time the genome learned to be a frog/human under selection forces, then we break the specificity expected of evolution – to explain life’s properties by a very specific history that led to it rather than to something else. I think it’s more useful to try to understand how groups of cells implement massive plasticity to find novel ways of being despite their standard hardware, and to map out and exploit the structured latent space of possibilities which genomically-encoded hardware can access as a problem-solving competency (a.k.a. intelligence).

39 responses to “Q&A from the internet and recent presentations 3”

  1. Leah

    My two cents: Don’t open YouTube comments. Unless you take the time to get rid of garbage on there it quickly becomes a cesspool. I personally love that you just leave your videos there with comments closed. There’s something kind of great about it even if I’ve found it annoying sometimes. 🙂

    1. Mike Levin

      ha yeah it was an experiment because so many people were asking for it. I haven’t looked yet to see how it’s going. Probably I’ll close them again.

  2. Jack knight

    Amazing answers. So much to download and re-read. My son Hendryx is a sophomore and loves chemistry, robotics, science, math, and computer science. We watched your interviews on YT and this info will help him connect the dots.

  3. Tony Budding

    Hey Mike, more great stuff here. I continue to be so impressed with your experimental integrity while still thinking outside the box. As you know, my work is almost exclusively with unmeasurable phenomena. Many simply relegate the unmeasurable to pseudoscience or armchair philosophy, and while it often is that, it doesn’t have to be. I’m not here to make this argument, but rather to offer some suggestions regarding the concept of consciousness.

    You say you have not made any strong claims about consciousness, yet you use the term frequently and in different contexts. In science, philosophy and epistemology, the absence of a clear definition for a key term is an impediment at best and often harmful. My suggestion is that the term consciousness is flawed because what it is being used to reference are aggregations of a wide variety of experiential phenomena.

    Instead, I think you will have a lot more success by breaking the phenomena referenced into more specific functions. For example, the phrase awareness of content can frequently be used in place of the term consciousness. This distinction is massive because awareness and content are both varied, so this gives you two different levers to pull when trying to figure what’s driving your empirical findings.

    The generation of the awareness of content is a deliberate effort. Deliberate efforts are also varied, so now you have three levers of increased granularity that can be used to guide experimentation where before you had one vague term used imprecisely and inconsistently.

    Determined efforts are driven by an agenda that is related to self-perpetuation (any sense of self will do), and involves some version of perceiving the environment and responding to those perceptions in a way that is intended to achieve the agenda. The qualities of each of these aspects vary based on several criteria.

    One of the biggest questions in sentience and intelligence is the relationship between the material/energetic/measurable and the experiential/Platonic/unmeasurable. A useful metaphor to contextualize how to think about them is the relationships among cameras, photographs and light. Change the settings on the camera and you change the photographs produced, which can make you think that the camera causes the photographs. But a camera is useless in the absence of light. All good photographers are acutely aware of the qualities of light and how they affect the images produced. Of course, we can measure light, so the metaphor is imperfect. The value then is in differentiating experiments with variations in the camera separately from variations in the light.

    Certainly the more sophisticated agential forms have the ability to use multiple forms of self to generate various agendas. A liver cell can act on behalf of its own survival, on behalf of the survival of the liver as an organ, and on behalf of the creature in which the liver resides. These variations in self-orientation and agenda drive different determined efforts and thus different results.

    Perception requires the acquisition of material data from the environment, but it also requires interpretation to convert the material data to experiential or Platonic data where it can be analyzed relative to the agenda. You’ve already demonstrated how learning experiences alter the qualities of the responses to the same material data.

    Finally (at least for this comment), the intelligence that compares converted material information against the existing set points, expectations, or agendas in order to determine a response is variable (we can think of it as a confusion to clarity continuum).

    In summary, if you continue doing everything you’re already doing, but replace the term consciousness with some set of these variables, I am confident you will have more success in both determining effective experiments and understanding the variability in agential performance.

    1. Mike Levin

      Thanks. This is all interesting of course. To my knowledge, I’ve only used “consciousness” in a couple of talks specifically at conferences on consciousness where that was the topic (and even then I point out in the 2nd slide that I do not work on consciousness and I don’t have a new theory of consciousness). In most of my talks and discussions, I try to avoid it because I don’t have anything definitive, novel, and actionable to say about it, other than it should be taken seriously in body organs for the same exact reasons we take it seriously in brains. I also don’t use it in almost any of my papers, which focus on specific 3rd-person observable competencies (behaviors with diverse degrees of intelligence), except one preprint with Nic Rouleau (https://osf.io/preprints/psyarxiv/va5mk, but OSF is mucking with the links so it seems to be unavailable right now).

      1. Tony Budding

        Thanks Mike. I know that is your formal stance, and you are consistent with it. What I’m referring to are not just the informal uses (the term conscious or consciousness appears 24 times in your post), but to the concept that it is a thing to take seriously. My point is that there is not any specific thing that is consciousness, but rather that the phenomena people are attempting to reference with the term are aggregations of simpler, more foundational phenomena.

        For example, is it absurd to say that your liver is conscious? You pointed out that the term absurd is poetry, but what if you replace conscious with capable of perceiving elements of its environment and responding to those perceptions based on previously established set points? Furthermore, can a liver adapt to pervasive changes in its environment? These shifts in terminology alter the conversation instantly.

        Another example, you wrote “The conscious agency of these systems is not usually available to the main, linguistic consciousness of the human body (the one who wrote the above question), but that doesn’t mean they are not conscious themselves.” We can change this to “The ability of these agential systems to perceive, respond and adapt is not usually available to the main, metacognitive analyses of the living human body (the one who wrote the above question), but that doesn’t mean they are not aware and capable of autonomous determined efforts themselves.”

        The reason living humans are capable of metacognitive analysis is because of the sophistication of our brains, not because of any vague sense of different or “higher” consciousness (higher is not your specific language but pervasive in the broader conversations). This is an example of the camera/light distinction. Awareness and determined efforts (will) are in the light category, while the genetic capabilities of the agential form are in the camera category.

        Again, this changes nothing else about your approach while removing ambiguities and introducing multiple independent levers that can be added to your experiments.

        1. Mike Levin

          > (the term conscious or consciousness appears 24 times in your post)

          ah you mean the Q&A post. Well, people keep asking about it, that’s why it’s there…

          > but to the concept that it is a thing to take seriously. My point is that there is not any specific thing that is consciousness, but rather that the phenomena people are attempting to reference with the term are aggregations of simpler, more foundational phenomena.

          that is certainly one position; I understand the idea.

          > what if you replace conscious with capable of perceiving elements of its environment and responding to those perceptions based on previously established set points? Furthermore, can a liver adapt to pervasive changes in its environment? These shifts in terminology alter the conversation instantly.

          right, agreed, but I think this is what Dennett called a “bait-and-switch”. Some scientists who want to understand consciousness will say that by swapping it out for perception and response you’ve moved to the field of behavior and physiology, and away from the “Hard problem of consciousness”. Some will say that’s because there is no such thing and all there is are publicly observable competencies. Others will argue that the reason the conversation was so effectively altered is that you shifted from a difficult topic to a much easier one, and that it’s not proven at all that this shift is valid.

          > Again, this changes nothing else about your approach while removing ambiguities and introducing multiple independent levers that can be added to your experiments.

          right, and we’re using those levers. All that kind of 3rd-person observable stuff (the competencies of sensing, decision-making, learning, navigating, exerting preferences, forming counterfactuals, problem-solving, etc. etc.) is precisely what we study and are publishing on – all valuable, important cognitive capabilities. That is what we do experiments on. It’s quite another claim (made in many books, but denied in many others) to say that this is all consciousness is, and nothing more. It’s a popular position, but showing it is another matter. I certainly don’t claim to have a definitive argument that is new in this ancient space where so many smart people have disagreed, so I mostly stay away from it (for now).

          1. Tony Budding

            Ah, ok. This makes sense now. From your perspective, all these points I’ve been making the past several years are indistinguishable from all the other vague theories of consciousness floating around. Fair enough!

            You wrote, “Specifically, whatever you do, after the fact, someone can decide to focus on the particle level and say “see, totally managed by physics”. I think you have to call these things in advance, not after someone else has done the interesting thing. The question for us all is: which conceptual framework facilitates you to do the interesting experiments in the first place.”

            Well, I have been calling these things “in advance,” so now it’s up to the experiments to validate or refute them, right? You are certainly using some of the levers now, but I have not seen you discuss the variability of awareness (confusion to clarity, biased to unbiased, etc.) or of determined efforts (ineffective to effective, powerful to weak, etc.) separate from a general learning process (I don’t read everything you publish, so I apologize if you have!). Granted, they are difficult to isolate but you seem to thrive on difficult challenges.

            Going further, I would not agree that I’m claiming that this is all that consciousness is. First, I am saying that there is no such thing as consciousness as an independent phenomenon (which again I get why you have to reject that conclusion). But more importantly, what I’m saying (calling in advance) is that these variables in awareness and determined effort are actually critical factors in life, learning, healing, surviving, developing and thriving. And as such, incorporating them into your experiments should be highly fruitful.

            I have never suggested you simply believe any of these ideas. My intent has always been to support your efforts through suggestions on where to look and what to avoid, which elements are fixed and which are variable. You are designing experiments all the time, many of which begin with conjecture. My hope is that these suggestions can help refine the conjecture stage to minimize dead ends.

            FWIW, there is another massive topic that affects all these efforts, which is the structural limitations of human cognition. All of science is intended to improve human understanding, and there is an implied assumption that we humans are capable of understanding everything once we develop the knowledge and technology needed. But this is not the case. For example, we cannot imagine what the absence of awareness is like because all imaginations require awareness.

            I know this sounds like shortsightedness or even heresy so I won’t attempt to explain here how I came to this conclusion (I do have hundreds of thousands of words written explaining it), but the reason I bring it up is because these limitations are major impediments to understanding intelligence, sentience, awareness, perceptions, responses, and determined efforts.

            Many of the questions associated with these phenomena are like asking what do gamma rays look like? They don’t look like anything for us because our eyes are incapable of perceiving them. So what we do is convert the patterns in gamma rays to patterns of visible forms so we can see them. Similarly, there are viable workarounds with the limitations in human cognition, but they require an acceptance of the finite nature of cognition. Again, I’m not trying to convince you, just calling it in advance.

            Mike, to be clear, I think what you’re doing is fantastic. It’s also incredibly challenging and daunting. My expertise overlaps with your world in just a few specific areas, so I’ve been offering suggestions. You make these posts and invite comments so I will continue to participate until you ask me to stop. I truly wish you all the best.

            1. Mike Levin

              > Ah, ok. This makes sense now. From your perspective, all these points I’ve been making the past several years are indistinguishable from all the other vague theories of consciousness floating around. Fair enough!

              no, I’m not saying that. You’ve sent a lot of material and unfortunately I have not had a chance to properly digest it all, but I’m sure there are valuable specifics there. What I meant was simply that re-framing the question of consciousness as a question about observable behaviors is something that needs strong justification and may or may not be a useful move.

              >Well, I have been calling these things “in advance,” so now it’s up to the experiments to validate or refute them, right? You are certainly using some of the levers now, but I have not seen you discuss the variability of awareness (confusion to clarity, biased to unbiased, etc.) or of determined efforts (ineffective to effective, powerful to weak, etc.) separate from a general learning process (I don’t read everything you publish, so I apologize if you have!). Granted, they are difficult to isolate but you seem to thrive on difficult challenges.

              I have not done those, as I’m not sure how, but I will certainly think about it.

              >what I’m saying (calling in advance) is that these variables in awareness and determined effort are actually critical factors in life, learning, healing, surviving, developing and thriving. And as such, incorporating them into your experiments should be highly fruitful.

              seems like a reasonable hypothesis. I will think on how to do that. It’s not clear to me how, or what specifically we can call in advance with these critical factors. I know it’s hard; it’s taken me decades to get to the point of testing *any* such things in practical contexts.

              >FWIW, there is another massive topic that affects all these efforts, which is the structural limitations of human cognition. All of science is intended to improve human understanding, and there is an implied assumption that we humans are capable of understanding everything once we develop the knowledge and technology needed. But this is not the case. For example, we cannot imagine what the absence of awareness is like because all imaginations require awareness.

              what does this imply we should do? I fully agree there will be limits on any agent’s ability to comprehend the world.

              > Mike, to be clear, I think what you’re doing is fantastic. It’s also incredibly challenging and daunting. My expertise overlaps with your world in just a few specific areas, so I’ve been offering suggestions. You make these posts and invite comments so I will continue to participate until you ask me to stop. I truly wish you all the best.

              I appreciate it!!

              1. Tony Budding

                >re-framing the question of consciousness as a question about observable behaviors is something that needs strong justification and may or may not be a useful move.

                I would describe this differently. First, I completely agree that any framing needs strong justification. I would say that there is no current framing of the question of consciousness that has clear definitions and strong justifications. My point is that in order to create a legitimate framing of the question, we should forgo the term consciousness entirely and instead identify the component parts.

                For example, we know perceptions and responses happen, so what are the component parts? In perception, some data about the environment is acquired through a physical sense. Visible light hits the eye. Signals are sent to the brain for interpretation and interpolation. The brain somehow converts the signals into raw experiential or Platonic information in the mind, where it is compared to pre-existing set points or maps, which are forms of expectations. When there is a discrepancy between the freshly acquired data and the expectation, an urge to act arises with the agenda to reconcile the discrepancy one way or another. The details of this response are based on learned behaviors.

                In whatever current framing exists, which parts of these processes are consciousness? Would you say all of it? Some of it? Which some? How do you even get started trying to answer these questions?

                I’m suggesting that this is the wrong question. Better questions are, can there be perception and response without awareness and determined efforts? If not, what is awareness, where does it come from, who is aware, how do material sense data get converted into knowledge, where does the ability to exert effort come from, how is it activated, how much can it be deliberately influenced, and where does all this happen?

                Instantly we have multiple actionable approaches, whereas before we had nothing to even begin with. Now, the only parts of this that are independently observable are the acquisition of material data and actions of response by the physical body. Everything in between is unobservable and unmeasurable, which creates huge conundrums for how to validate cause and effect relationships.

                >seems like a reasonable hypothesis. I will think on how to do that. It’s not clear to me how, or what specifically we can call in advance with these critical factors. I know it’s hard; it’s taken me decades to get to the point of testing *any* such things in practical contexts.

                Yeah, it’s very tricky. The simpler the system, the less variation should be seen. Complexity is modular, so very slight variations across layers of interdependent modules add up to more obvious differences. Try looking for variations in learning speed, whether the learning is retained or has to be relearned over and over, and whether stasis only happens at full optimization or whether the system settles for good enough. I’m not sure what you’ll find, but it’s a starting point.

                >what does this imply we should do? I fully agree there will be limits on any agent’s ability to comprehend the world.

                We could say that the basis of science is knowing what you know, and knowing what you don’t know. Experiments are performed to validate the knowledge, and the results are published so others can replicate or refute it. If we accept that human cognition is finite (it is also perspectival and modular), then we have to include another category: knowing what can’t be known.

                For example, the typical human eye can perceive the visible ROYGBIV colors, a narrow band in the middle of a massive electromagnetic spectrum, most of which is outside the capabilities of the eye to perceive. No matter how great our vision may be, we simply cannot see X-rays or gamma rays. No matter how smart we may be, we cannot know anything that is inconceivable through human cognition.

                This brings up a massive question: how can we know what can and can’t be known? This happens to be one of the main questions I’m addressing in my book, which requires about a quarter million words to do justice (over 50,000 words just to establish the framework in which the question can be addressed meaningfully).

                To make matters more complicated, knowledge is skill-based, so we have to learn how to differentiate what is knowable by me right now from what can be knowable when I optimize my skills of knowledge. It’s like the question: can humans run a sub-4-minute mile? For a long time, the answer was no for everyone. Since Roger Bannister did it for the first time in May of 1954, nearly 2,000 men have done it as well. No women yet, but I’d be surprised if we don’t see it soon. Now, can you run a sub-4 mile today? No. Can you personally train yourself to get there? Probably not (I believe we’re about the same age, so the answer is definitely not for both of us!).

                What do the Roger Bannisters of knowledge look like, and how are they different from the rest of the population? Answering this question is another key purpose of my book.

                Once we realize that we cannot directly see X-rays and gamma rays, we stop trying to perceive them and turn to technology. Once we realize that certain phenomena fall outside the hard limitations of human cognition, we stop trying to know them directly and instead create workarounds to model them in a theoretical way that predicts what is knowable. The more accurate the predictions, the more valuable the theoretical model.

                Addressing and assessing the true boundaries of what is conceivable vs inconceivable is extremely tricky (another reason my book is so long). I can give you some examples, but without the full justification, there would be no way for you to assess the validity of the examples. In the meantime, I have tried to present you with some specific suggestions that can be implemented today that don’t require you to explore the depths of the rabbit hole I live in.

  4. Emmaline

    Hi! I’ve heard you bring up Plato’s theory of forms a few times, and I’m wondering if you have written about the ways in which the implications of your research are similar to his theory? Thank you

    1. Mike Levin

      https://osf.io/preprints/psyarxiv/5g2xj_v3
      (where I point out that my goal is not to examine or support Plato’s and Pythagoras’ theory specifically, but only to link to the general idea and the way Platonist mathematicians use it).

  5. MICHAEL P. GUSEK

    I’ve already mentioned your work to a couple of family offices that love new stuff like yours. If you’re interested in jamming on AI use cases for research, let me know.

    1. Mike Levin

      Thank you! Happy to discuss over email; we’ve been thinking a lot about use of AI for our purposes.

  6. Bill Seltzer

    Wow! Beautiful! And thanks for sharing your work. I never miss a lecture.

  7. Henry Volkmann

    Regarding life extension, one point I never hear discussed is the benefit of shorter lives – or at least not exceptionally long/immortal lives. Not for “meaning”, but rather for the evolution of society. It hinges on the assumption that decreased turnover of individuals is intimately tied to subsequent stagnation of ideas.

    With the development of a new generation of scientists comes a breath of fresh ideas and perspectives. This new crop is essential to grapple with the emerging challenges of the world. You said it yourself above: you post videos and blogs to meet people before they get entrenched in dogma. So if we extend life, will we just get stuck and stagnate? Sure, the next engineering problem could be to induce “late-life” plasticity, but could we selectively reconfigure ‘parts’ of someone’s psyche? I find it difficult to imagine the ability to reconfigure one’s perspective sufficiently without catastrophically affecting the rest of their mind, such as losing their sense of self/identity (although I am familiar with the benefits of the latter, at least temporarily).

    If this generational rotation of ideas is baked into society as a trait of a superorganism, would the advent of this technology be akin to cancer?

    I’d like to state that I’m not arguing to close off avenues of science due to potential fears of how it could turn out, but I do think considerations like this should be at the forefront for those discovering it. This barely begins to explore the immediate and long-term effects of such a scientific breakthrough. I’m interested to hear your perspective as someone at that cusp.

    1. Mike Levin

      I get it. Let’s put aside the thorny issue of limiting people’s lives in favor of evolution of society. I would say,
      1) assuming that turnover due to death is important for advance of novel ideas, the question then is, how long should the lives be, at optimum? I don’t believe our current life span was set by a wise decider who wanted to optimize novelty and progress – I think it’s the outcome of evolutionary forces that don’t care anything about our values (which is not to deny spirituality or forces beyond natural selection – simply that I see no evidence of anything being optimized for the kinds of outcomes we are talking about). So. Might we need to limit lifespan (at least, of scientists) to 30, since most big advances are made by scientists in their 20’s (I think that’s still true? not sure, but you get the idea)? Maybe we need to kill them off before they stagnate in tenured positions and hold back the young turks? Or, perhaps the right age is 200 – not infinite, not 1000, but like, 200, so maybe a little life extension? I don’t see how you set a number on this, but I think the status quo is arbitrary so we don’t need to defend it.

      2) Hypothesis 1: life extension will only happen when we crank up regeneration fully;
      Hypothesis 2: when we crank up regeneration, minds (not just brains) will not stagnate. I can’t prove it now, but my strong suspicion is that the stagnation is the result of the degeneration of our embodiment. I think some people (perhaps not all) can maintain novelty indefinitely, at least to the point that they don’t stifle others (even if they themselves stop generating it at some point). I’d like to find out if this is true; I think it’d be criminal as a society never to find out the answer.

      3) Hypothesis 3: it might take more than ~80 years to reach the wisdom needed to really live a meaningful life. I know we’ve managed some of that in less, but how do we know it’s like caterpillars who kill themselves before finding out what they’re capable of? Maybe at 300 or so, you suddenly kick in to a new level of understanding and integration that makes our former sages look like children who don’t know there’s an adult phase? I’d like to find out.

      1. Zachary Collins

        Doesn’t MPP suggest that evolution throughput is related to efficiency, without contradiction, through surprising relationships?

        What allowed me to get over my own fear of life extension or biological hacking was letting go of any privileged sense of quantification. My bet is that if something wants to live to 300 or 1000 years, it will be necessary to allow and preserve insects that live 72 hours.

        In the end, if we develop tools that allow things to explore their morphologies and lifetimes freely, those things will explore a diverse set of possibilities in practice. And no matter what goal you have, you may be surprised at your dependency on something that doesn’t.

    2. Albert

      But why should we not let other people decide how long they want to live? He was also talking about raising IQs.
      If I had another chance at life, I would personally do everything differently.
      I think this is the most interesting topic I have come across; it baffles me that it is not talked about more, and that more people aren’t focusing on it. I first came across Robert Becker’s book, but he only used batteries; then I came across Mike. I am also now reading Burr. It seems like even Robert Becker kind of predicted the future in his book when it comes to bioelectricity and healing.

      I think people should be allowed to decide how long they want to live.

  8. Benjamin L

    Regarding the unhelpfulness of saying “emergence”, you may find significant common ground with these two essays, “The Futility of Emergence” (https://www.lesswrong.com/posts/8QzZKw9WHRxjR4948/the-futility-of-emergence) and “Mysterious Answers to Mysterious Questions” (https://www.lesswrong.com/posts/6i3zToomS86oj9bS6/mysterious-answers-to-mysterious-questions).

    Regarding setpoints, the economy has setpoints engendered by the relative price system that functions with respect to the agents in the economy to maintain patterns of specialization and trade in the economy, and these setpoints can be rewritten to create the economic equivalent of two-headed flatworms. However, those setpoints are constantly evolving as economic conditions change, trending in no direction that any human has ever discerned. The economy does not have an adult form. Additionally, the economy does not die of old age. The lack of an adult form may be the reason why the economy does not age: the fact that the economy’s target morphology is constantly updating means the system never gets bored of maintaining itself. The job is never done.

    Highly speculative, but perhaps intriguing—an economic approach to longevity.

    1. Mike Levin

      Very interesting indeed. Thanks for the links – they look relevant – and for linking my ideas to economics; lots to think about!

  9. Benjamin L

    By the way, I really appreciate your efforts to export and import ideas across fields. I’ve reached out to a number of scientists about connections between economics and their research, but you were by far the most receptive.

    1. Mike Levin

      🙏🙏

  10. Mark

    Hi Michael, big fan of your work.
    Something I’ve noticed while listening to your talks as well as your various podcast appearances is that, whenever you speak of medical interventions that could possibly be derived from your research, it is usually in terms of limb regeneration or curing various cancers.
    My question for you is, are these just taglines you happen to frequently use while doing outreach, or are these actually the kinds of issues you aim to “solve” first?
    In the case of the latter, wouldn’t something like scarless wound healing (a seemingly smaller goal that could potentially be achieved way sooner than regrowing a whole limb) be a better first milestone to aim for, especially considering the vast number of patients whose lives could be greatly improved by such an intervention?

    1. Mike Levin

      I speak of those specifically because those are the areas where we have already demonstrated progress (we’ve been working on them for some years in animal model systems) and the areas we plan to address. The thing with wound healing is that 1) our bioelectric interventions specify organs, not wound healing, so I actually am not sure they are of any help in wound healing (it’s a totally different process than regeneration or organogenesis), and 2) tons of labs are already working on wound healing; that space is crowded with smart people and doesn’t need my lab.

      1. Mark

        Thanks for answering, Michael. That makes sense.

        Another question — I heard you say that your lab has moved on from frog models to mammalian models. More specifically, to rats.
        I know you can’t comment on unpublished work, but could you perhaps give a guesstimate of when we might expect to see the first few papers being published on this? Is it a matter of a couple more years? 5? 10?

        1. Mike Levin

          I can’t give any dependable info, since a) this is science and we don’t know in advance what we will find, and b) the ecosystem of funding and talent is completely disrupted right now so no one knows which projects will be able to continue and which won’t. Having said all that, the first papers showing our approach (not necessarily showing impressive in vivo data, I can’t say anything about that right now) should be in 2026. Maybe something at the end of 2025, we’ll see.

  11. Bill Seltzer

    It seems likely to me that the breakthroughs being produced by you and your colleagues will produce changes in civilization as significant as the development of language.
    Consider life expectancy measured in terms of centuries, traumatic injury repair, cancer cured, birth defects reversed, aging diseases eliminated, the creation of cyborgs, bio bots and other novel life forms, and the elimination of much suffering.
    We are replacing Darwinian evolution in many ways to enhance our physical and mental abilities. One can argue that we are migrating from Homo sapiens 1.0 to Homo sapiens 2.0 at a speed measured not in millennia but in decades, say 40 to 100 years.
    Some will object to such scientific advances. To paraphrase Max Planck: new ideas in science are accepted one funeral at a time. Most will react to century-long lifespans by taking better care of the planet and each other. Michael Levin will be known as the greatest change agent in history.

  12. Teo

    Hello,

    Thank you for openly sharing your work. You have definitely helped transform the thinking and scientific worldview of many scientifically inclined people and aspiring scientists.

    I wanted to share a framework (https://vasily.cc/framework/) in case you are not already familiar with it. I think it aligns very well with the science and philosophy you have developed. In general, this blog contains many interesting ideas that I think resonate with concepts such as collective intelligence, goal-directed agents, and potential interventions in a system.

  13. Amanda

    Thank you, Dr. Levin, for all your work. Do you foresee a future where your research or the work of your labs will be used for gender reassignment for transgender individuals?

    1. Mike Levin

      I believe in freedom of embodiment. I think future generations will find it almost impossible to believe that we had to live our whole lives in whatever body we were given – with all of its limitations (disease susceptibilities, birth defects, cancer, etc. etc.), picked out for us by stupid random cosmic rays hitting egg cells over millions of years. Our future bodies will be intentional, and make use of the whole space of possible form and function.

  14. Todd Luger

    I’ve recently been learning about Yogacara Buddhist philosophy. One of the ideas in this philosophy is that there are five universal mental factors occurring in each moment of human consciousness. There are a wide range of other mental factors or processes that take place in specific situations, but these five are universal to all conscious experience in humans. These five factors are sense contact, attention, sensation, perception, and volition. Sense contact refers to the initial stimulus of a sound, for example, which may or may not capture your attention, at which point you judge that the sound is either pleasant, neutral, or unpleasant, label the sound if known or just classify it as unknown, and then feel the impulse to act in regard to the sound or not.

    At this most basic level of consciousness, two things occur to me. First, most if not all animals other than humans probably don’t experience perception, at least not in the way that humans do, where we actually put a word to a thing. Perhaps there is some analog going on in animals, or perhaps in certain higher animals, but I don’t think we have any compelling evidence of that. So, if one considers just the four other universal mental factors, it seems to me that any organ or cell in the body meets this threshold. To continue with the example raised of the liver, it certainly responds to stimulus with something like attention, followed by a judgment as to whether the sensation is good, bad, or neutral, which would require different responses or no response at all. You could argue that there’s no consciousness involved here, that it is just instinct or impulse, so it is inaccurate to use words like attention, judgment, or volition. However, according to Yogacara (and Buddhist philosophy in general), these universal mental factors are also occurring in humans without the intervention of so-called consciousness.

    Even the decision to act is not considered something made by an executive but rather something that becomes available to consciousness at some point after the decision has already been made and often after the action has been executed. There’s considerable research to show that much of how humans describe how they made a decision about something and acted on it is actually ex post facto to the physiological and neurological processes involved.

  15. Cate

    Hi : )
    Thank you for answering so many questions. English isn’t my first language and I am not a scientist (so I don’t have the scientific language, even less so in English), but I hope you can bear with me, as I think this is crucial to help alleviate suffering (which I don’t believe is of God but completely man-made).
    My question is what field do you believe one should or could go in that will be supporting human recovery or health in the future, (knowing mental illnesses and physical illnesses are absolutely connected. That’s my stance, not yours but I explain below)?
    I am an RSW and I myself almost died of severe chronic illnesses after being given an antibiotic (I have almost fully healed from each condition). My body had developed over 40 illnesses almost overnight (I saw blue and green lights in my NS (super scary), I was in a wheelchair, couldn’t digest food, my tissues had stopped holding me (not surprisingly, my electrolytes were all messed up…), I had brain demyelination, hypoxia, issues with mast cells, was showing symptoms of Lupus (low WBC, anemia, pain and swelling), etc., and my organs had started to not work properly; I had developed heart issues and diabetes. Quite the nightmare. I am not making this up, I have my full medical file; it’s extremely rare though, and I get why people are confused, but I know now that the bioelectrical field was completely messed up (with electrolytes as proof) and it literally made the full body stop being in coherence..!
    I AM fully healing though as I have had a PROFOUND awakening on the nature of reality and understood that we are not these physical bodies and that mind is all there is.
    Not wanting your stance on that (unless you want to give it).

    I am now looking at completing an M.Sc. in Psychology, Consciousness and Neuroscience (or something similar), as normal psychology still believes the body to be physical, which I now know for sure isn’t the truth.
    I am also wondering if you see a place for social work anywhere in healing, and if so, what do you think would be a good thing to study in that field?

    I think (strongly believe) a big part (most likely all) of our Western diseases are diseases of the mind, and they sometimes seem to be triggered by CNS injuries (there is an obvious link/feedback loop between them). They also are most definitely passed down as a pattern from parents to children (as if the perfect form we should come from has too much mind interference from parents with their own mind problems, and it disrupts the NS (and the bioelectrical field) of the children, who are born with birth defects or seizure disorders or cancers, etc.). One thing I have noticed is that colicky and jaundiced babies at birth happen because the NS is disrupted, so the children can’t digest and the liver doesn’t eliminate bile, and it’s fully related to the ANS. Of course the kids don’t do it to themselves (they ‘do’ but don’t know), as they are connected to the ANS/Mind/Electrical field of another being (mom). I believe the bioelectrical field is messed up, which affects the coherence of the systems. I know your work is specifically on cancer for the moment, but cancer and diabetes and hEDS and POTS and autoimmune diseases and seizure disorders, etc. all go hand in hand, and people who develop one of those often show symptoms of others, or (as in my body’s case) it happens very quickly after exposure to something very disruptive to the NS/Mind.
    You can find people online who are healing, as my body has, from many, many of those (20+ in the same person), after literally doing ‘mind change’/somatics (SE) (also what the Bible says to do (renew your mind) to be walking in divine health without ‘demons’, in the Kingdom of Heaven that is within and not without… food for thought; I believe they knew what we forgot by completely eliminating anything non-physical from our science).
    Anyways, I want to help others to 1) not get to that point in the first place, 2) prevent children from being affected by their parents’ NS/Mind (I am convinced this is what happens) so they can be born healthy and thriving, and 3) come back from it if necessary and heal (which almost always involves healing traumas, I believe because of their destructive nature on the NS and bioelectrical fields).
    I am just not sure what the best route is academically so this can be taken seriously. I know what I am saying makes a lot of sense, but to a lot of people in medicine it doesn’t, as they are still focused on a physical reality and not on what we actually are (the software).

  16. Cate

    I forgot to mention above:

    I also want to mention the work of Robert Naviaux, PhD (Naviaux Lab) at UCSD, who has linked all of these conditions (I had obviously linked them all myself, because they all appeared suddenly and all together in my body, so it was pretty obvious to me despite not being obvious to medicine) with something he called the Cell Danger Response, here: https://www.sciencedirect.com/science/article/pii/S1567724919302922

    You can find more recent material on Naviaux and his lab’s research on the website https://naviauxlab.ucsd.edu/.

    I think your work and his are closely related.
    Thanks again so much for what you are doing. It is so important.

    Blessings!

  17. Dan

    Is the amplituhedron the Platonic object that informs/instantiates our particular universe and its physics?

  18. Lisa Rogers

    I am so inspired by you and your team. The implications and applications of your work are profound and vast. From the first time I heard you on a podcast, I saw clearly in my mind how, with a little extrapolation, it could be the foundation of a new operating system for civilization. It’s that powerful of a paradigm shift. I believe you embrace interdisciplinary sharing within the sciences and philosophy, for example, but do you also have conversations with other types of experts in fields like economics, ethics, law, political science, education, sociology, ethology, biosemiotics, arts, horticulture, linguistics, communications, etc.? I would certainly be interested in hearing those conversations as well.

    1. Mike Levin

      Thank you. I have some of those conversations, and record a few of them. For example here are a few related to economics – check out https://www.youtube.com/@drmichaellevin/search?query=lyons. But my main job is making progress, so I don’t tend to do a lot of conversations unless I think that they will move something forward (i.e., I have something new to contribute to their field, or they have something I can use to move my work forward). There are enough people just having conversations; I have a specific remit and very limited time here. Sometimes it broadens out to other disciplines but I only do it when I think it can help us move the needle in a practical way. Of course sometimes I get surprised! In any case, yes I do see relevance to all those areas, but I only roll things out once they’re cooked to a certain level.

  19. Lisa Rogers

    When writers or journalists engage your work for broader audiences, how do you decide when that kind of translation is useful versus premature?

    1. Mike Levin

      Well, when writers or journalists want to write about it, we don’t have a choice – they can write whatever they want, premature or not. The challenge comes when people send me “Here’s my write-up of your work, please check it” pieces. On the one hand, I definitely don’t have time to read and correct all that text. On the other hand, if I don’t, stuff gets put out there that’s wrong. I have my favorite science writers who do a great job, but there is just too much out there to try to micromanage. It’s an art, there’s no real way to know how to do it optimally. I basically follow the heuristic: if people want to know what I think, they can read my writing directly. Anything else, I take no responsibility for. The other important thing is to decide what is ready and what’s premature for me to talk about in public. That too is an art form; I try to balance the need to push the envelope with the policy that I don’t talk about things until they are practically impactful (can be addressed in the lab, or in some other way be of practical use). The goal is to eventually die without sitting on too many worthwhile things left unsaid, but without having put too much out there that is speculation that no one can make use of. Some kind of balance…
