The following is another set of questions (in no particular order or organization) that I've been asked after talks, in emails, on Twitter, etc., and my attempts to answer them (some of the most common are here). I saved these because they are either interesting or because they come up a lot (or both).
Do I use AI such as language models?
Very rarely. I’m not against its use, but I haven’t found a good use case for it besides entertainment. I sometimes make images with Midjourney, and I occasionally ask GPT or Claude to come up with an acronym or something. Besides such creative things, I don’t think it’s reliable enough yet to use it for serious work (most of what it writes requires a lot of editing from me and fact-checking, which sort of defeats the purpose). I would be interested in use cases where it might be reliable enough to actually save me time.
Did I invent bioelectricity? Did everything in bioelectricity get invented in the last century, perhaps by Robert Becker?
I certainly did not invent the idea of non-neural bioelectricity – serious investigations by brilliant and indefatigable people date back to Galvani, Volta, Emil du Bois-Reymond's work on wound currents in the late 1800s, and a ton of work from the 1920s to the 1990s. See for example my write-up on the Godfather of bioelectricity, H. S. Burr, or an interview with Rich Nuccitelli, a truly great contributor to this field who spans the classic and modern eras. A lot of the old work is referenced here and especially on p. 298 here. My lab and work owe these pioneers a lot.
However, as in all science, the early work did not answer (or even identify) all the key questions, nor push into all the possible applications. What is new since our work began (around 1998) is:
1) The first molecular tools to read and write bioelectrical states of non-neural tissues in morphogenesis and to connect these processes to downstream gene expression. This enabled bioelectricity work to finally be connected to the ongoing advances in molecular genetics – important for bringing the field into the mainstream.
2) My idea that somatic bioelectricity is a kind of cognitive glue, just like bioelectricity in the brain; the idea that cell groups form a collective intelligence that navigates anatomical morphospace as problem-solving, and that bioelectricity is the medium for its memories and computations, and the interface by which we can re-write these memories and guide the living system toward desired outcomes. In other words, bioelectricity isn't just another piece of physics that we need to keep track of in development or regeneration, but a uniquely tractable entry point to the cellular collective intelligence.
3) A focus on patterns of resting potential (Vmem), rather than electric fields, and methods to understand how spatial distributions of membrane voltage can encode organ-level information, showing that these patterns can be used to trigger subroutines that build entire organs (e.g., this).
So, absolutely, the idea of bioelectricity has been around for a very long time, and people like Robert Becker inspired me and my lab. But the field is in a very different place now, and neither the link to molecular biology and the behavioral sciences, nor the advances in the biomedicine of birth defects, regeneration, cancer, and synthetic bioengineering, had been made by prior workers.
Isn’t everything you talk about a kind of computation?
I'm a (former) software developer too, so I get it – I love coding. But it's not so simple. I am currently writing a paper called "Booting up the agent", which is all about this earliest process, comparing (and contrasting) it to what happens in a traditional computer in the nanoseconds/microseconds when it first changes from a hunk of metals obeying the laws of physics to a computational device obeying an "algorithm". It's a deep question how living systems boot up, and in general I've talked about a lot of the differences between traditional computers and the agential material of life, and thus the places where standard computer metaphors fail us. For example, living things have to uncompress (with creative improvisation) and decode their own memories, via polycomputing, and to construct their own hardware as they go along – it's not like a Turing machine, where the data and the machine are cleanly separated.
Two recent papers have critiqued the "agential stance" (DiFrisco, J., & Gawne, R. (2025). Biological agency: a concept without a research program. Journal of Evolutionary Biology, 38(2), 143-156. doi: 10.1093/jeb/voae153, PMID: 39658090; and Potter, H. D., & Mitchell, K. J. (2024). A critique of the agential stance in development and evolution. In The Riddle of Organismal Agency (pp. 131-149)). Both argue that loose ascription of agency is neither warranted nor helpful, and that more general systems principles (the products of which are iteratively subjected to the verdict of natural selection) are up to the task of explaining the phenomena that may be prompting the agential stance. What do you think about that?
This comes up a lot. In all of my relevant papers (for example, here), I say explicitly that loose ascription of agency (i.e., treating agency as a linguistic or philosophical matter that can be brought in according to one's preferences) is indeed inappropriate. My point is that we now have rigorous ways to determine precisely when ascriptions of agency (i.e., porting of tools from the cybernetic and behavioral sciences) are appropriate: if and only if they facilitate new discoveries and new capabilities at the bench. The DiFrisco paper makes an empirically incorrect claim about a lack of a research agenda. These ideas not only have a research program, but the research program has been paying off very well, in terms of new discoveries heading to the clinic and of new capabilities in bioengineering. The key is that loose ascriptions are indeed unhelpful, but that is not what we are doing. What we are doing is specifically showing, through examples of new discoveries (not simply "explaining" things that had already been done by someone), that this stance leads to finding new biology and new capabilities.
Potter and Mitchell argue that the kinds of evidence often put forward in the literature, focusing exclusively on cases where embryonic regulation or adaptation to challenges is successful in achieving a viable outcome, give a mistaken, seductive impression of morphogenesis as performed by active, problem-solving agents. But this represents both an ascertainment bias and a confirmation bias. Wouldn't a more comprehensive survey of the ways in which development often goes wrong, or fails to regulate, or even the many cases where plasticity responses are themselves maladaptive, undercut this impression?
This is an interesting question. Indeed, the failures of development are as informative as (or often more informative than) the successes – we don't think they undercut the point at all. The limitations of a cognitive system as it navigates a problem space say much about its level on the spectrum of agency and its properties, as, for example, this classic popular book did for neuroscience (in any case, the study of disorders is one of the most tried-and-true strategies for making progress in neuroscience and medicine). For example, it is critical that embryos can make mistakes – a clear sign of a goal-seeking system. I am writing a separate paper on the errors of morphogenesis and how cognitive pathologies give helpful insights into the etiology (and repair interventions!) of birth defects, as a companion to this paper, which describes morphogenetic competencies as successful instances of intelligence.
Can’t all the examples you give in your talks be accommodated by standard views of genetics and emergent complexity?
Everything hangs on "accommodated". Yes, the discoveries that have been facilitated by these ideas can, after the fact, be accommodated by general systems principles, or indeed by physics (in the minimal sense that they are not inconsistent with them). Systems principles are not wrong, and the discoveries did not find magic. But the notion that, therefore, these principles are sufficient and talk of agency should be eliminated does not follow, for two reasons. First, it was not those principles that led to these discoveries – they did not predict them (in fact they blocked them; for example, 2-headed planaria were first described in 1903, but no one bothered to recut them until 2008 because it was considered obvious that they would go back to the genetic default of 1-headedness in the absence of genomic editing). Second, for us the question is not whether you can accommodate discoveries after the fact, but whether a particular set of conceptual tools (and the technology they imply – such as bioelectric reading of prepattern memories outside the brain, first developed in 2002) uniquely facilitates new discoveries more than the status quo does. And of course, any complex instance of human cognition can be said to be "accommodated by physics", and any standard molecular-biology explanation can be said to be "nothing more than quantum foam doing what quantum foam does". The problem with such statements is not that they are wrong, but that they are sterile for the next steps of discovery. They simply don't move science forward.
In your work on diverse intelligence, are you talking about real cognition/intelligence, or only “as-if”?
Let's assume that genuine agency actually exists – in brains – and has a mechanistic basis. Beyond this, one position that might be taken is: outside brains we don't see any agential phenomena, only robustness and attractor phenomena, and everything else is "as-if". One simple answer is that the kinds of problem-solving behavior we see in morphogenesis are underwritten by mechanisms evolutionarily homologous to the way it's done in brains: bioelectricity. Taking cognitive science and evolution seriously, why would one say that one is genuine and the other is merely as-if? If neural ion channels, gap junctions, neurotransmitters, etc. run algorithms for measuring and reducing error, storing memories, etc., in the brain, then one needs a solid argument for why the exact same phenomena, mediated by the same molecular mechanisms, do not count as "real". I have yet to come across any convincing principled argument for why agency is real in the brain but as-if elsewhere, other than a desire to keep to pre-scientific colloquial usage.
Also, I think an "as-if" stance overall is dangerous because it implies its alternative, "real", suggesting that some other concepts (pathways, attractors, etc.) represent a true, un-distorted view of reality. I don't believe that, as scientists, we have access to any absolute ground truth; all our formal models are metaphors, somewhere on a continuum of "how well do they enable new discovery and effective applications", capturing some but necessarily not all features of the world, with different degrees of fecundity and insight. In other words, a binary view of the "real vs. as-if" distinction is unhelpful here, as it is almost everywhere else.
Your example of kidney tubule formation in newts, despite a radically different (induced) cell size, does not strike me as remarkable – I think many endothelia in many species (including blood vessels in vertebrates and the tracheal system in Drosophila) use a strategy whereby large-diameter vessels are wrapped by multiple cells, and smaller ones by single cells (which bend more in the process). So this example doesn't seem especially notable, as something similar already happens in many species.
This is true, and I am not claiming that newts found a solution that no other biological system has found. What we are saying is that they can find and deploy a solution that is not a normal part of their developmental repertoire (at least, in the kidney tubule), which reveals a problem-solving capacity to deploy novel behaviors in an appropriate, context-specific, but not hardwired manner. This non-magical but remarkable process is common in behavioral examples of problem-solving in intelligence testing; it doesn't require a truly novel set of behavioral components, but rather appropriate and effective improvisation, which can include mechanisms used elsewhere.
The notion of cognitive glue, as necessary to bind cells toward a specified target morphology, assumes that final morphology is a predefined target that a collective can orient towards. Isn't morphology merely emergent? Classical models of self-organization, such as Turing instabilities or other pattern-formation models, do not have a "set point"; the stability of the fixed point is an emergent property of the dynamical interactions among the elements of the system.
It is true that Turing (and other reaction-diffusion) mechanisms do not depend on explicitly minimizing distance from a setpoint. Indeed, some morphogenetic events (e.g., wound healing) can be usefully explained as goal-less, pure feed-forward emergence (purely open-loop, like cellular automata or reaction-diffusion systems), but some cannot. It is important to remember that the idea of morphogenesis as the result of a process with zero feedback (no setpoint homeostasis) is just a hypothesis, and a priori it is not more plausible (especially given the importance of feedback in biology) than models in which there are one or more levels of homeostasis, and thus setpoints (a.k.a. simple goals). Moreover, some capabilities (specifically, many of interest to current biomedicine) are not facilitated by a zero-feedback model because they require solving an extremely hard inverse problem between genetic information and system-level outcomes of anatomy and physiology. In many of my talks and papers, I show that some morphogenetic events do have an encoded setpoint, because we and others have succeeded in re-writing it (which is the practical criterion for being able to say convincingly that a system has a goal).
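To make the contrast concrete, here is a deliberately minimal sketch (in Python, with made-up parameters – not a model of any real tissue discussed above) of the two classes of process: a feed-forward reaction-diffusion update, which stores no target anywhere, versus a homeostatic update, which stores an explicit setpoint pattern and reduces error toward it. The practical difference is the one just mentioned: in the second case there is something to re-write.

```python
import numpy as np

def open_loop_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    """One feed-forward Gray-Scott reaction-diffusion step (1-D, periodic).
    No target pattern is stored anywhere; whatever spots/stripes appear
    are emergent from the local rules and initial conditions."""
    lap = lambda x: np.roll(x, 1) + np.roll(x, -1) - 2 * x
    uvv = u * v * v
    u2 = u + Du * lap(u) - uvv + F * (1 - u)
    v2 = v + Dv * lap(v) + uvv - (F + k) * v
    return u2, v2

def closed_loop_step(state, setpoint, gain=0.2):
    """One homeostatic step: measure error from an explicitly stored
    setpoint pattern and act to reduce it, whatever the starting state."""
    return state + gain * (setpoint - state)

# Toy usage: only the closed-loop system has a "memory" we can re-write.
n = 100
u, v = np.ones(n), np.zeros(n)
v[45:55] = 0.25                              # a local perturbation
for _ in range(1000):
    u, v = open_loop_step(u, v)              # a pattern emerges; no setpoint to edit

target = np.sin(np.linspace(0, np.pi, n))    # hypothetical "target morphology"
state = np.random.rand(n)
for _ in range(50):
    state = closed_loop_step(state, target)  # error shrinks toward the setpoint
# Re-writing `target` changes the outcome; there is nothing analogous
# to re-write in the open-loop system above.
```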
Doesn’t the notion of a stable morphogenetic goal conflict with stable defective outcomes? Does this mean that morphologies that deviate from a supposed “goal” could not be constituted by cells in metabolic/physiological equilibrium?
Indeed, some "malformations" are themselves stable states (like our double-headed flatworms, which 10+ years of re-cutting experiments by students in undergraduate classes have confirmed). The crucial point is that stable states are re-writable, which means that future processes will reduce error toward a different outcome than before – they are often not hardwired. My claim is not that there is some universal way to say which pattern is "correct" and which is "malformed" – simply that some processes in morphogenesis use corrective feedback to achieve a specific outcome.
Why do you do all the outreach?
My goal is to: a) foster Open Science (information available to all who want it); b) reach young people early in the STEM pipeline, to hopefully impact their thinking before their conceptual frameworks calcify and their commitments make it hard to change course; and c) reach experts in other fields who may bring us valuable insights and new tools from other disciplines (foster trans-disciplinary efforts). It should be noted that I actually don't do much extra work for the outreach – mostly all I do is hit Record when I give Zoom talks or have (some) meetings with colleagues that I'd be doing regardless, and put them directly online. I don't edit or do anything else to improve popularity, I don't have sponsors, and I don't charge for it. I'm not interested in optimizing the numbers of generic viewers or followers – I want the material out there, and beyond that I just assume that the people who have the most ability to do good with it will find it. The two exceptions are: a) my blog (where I write things that don't fit into standard academic papers, and things that are my personal opinion and not an official stance of my lab), and b) interviews I give on podcasts and such, which I use as practice: each time I do one, it's an opportunity to hone the way I present information to make it clearer and more convincing.
Can’t other conceptual interpretations of your work be given besides the ones you emphasize?
Yes, as with everything in science, multiple interpretations can exist. Of course, after someone does something, there are (infinitely) many interpretations. Specifically, whatever you do, after the fact someone can decide to focus on the particle level and say "see, totally managed by physics". I think you have to call these things in advance, not after someone else has done the interesting thing. The question for us all is: which conceptual framework enables you to do the interesting experiments in the first place? My specific claim is that all the new stuff we've discovered over the last 30 years is new (i.e., hadn't been done before) only because we have a framework that is especially fertile for a large domain of new findings. Time will tell. No need to argue – everyone grab your favorite conceptual framework and see what cool things it lets you discover.
Why don't you drop all this philosophical stuff? The experiments are good – just do those.
Impossible; the reason we did those experiments (and others hadn't) is that one's conceptual framework makes certain things invisible/un-askable and suggests looking at other things. Every framing shows you something and hides other directions. My only claim is that my framework (which is being updated constantly, not frozen) has shown value in pointing us to new discoveries and new capabilities (empirical findings, some already heading toward the clinic), and suggests a massive research program for ongoing and future work. I am not saying it's "correct" (whatever that could mean) or that I or someone else won't find a much better one soon (I hope so!).
Do you claim to have established consciousness in Xenobots and such?
I haven’t made any strong claims about consciousness yet at all, in any system, nor pushed a particular theory of consciousness. Nor (as of August 2025) have I made any cognitive claims about any of our biobots (stay tuned, the data are coming). All my strong claims have been about publicly observable functional capabilities – cognition, intelligence, problem-solving, learning. I have begun to say a few things about consciousness in general, for example here.
On the convincing patterns in the eye-vs-gut-cell battle: does the "cellular debate" you described happen to coincide with any existing consensus algorithm you know of, or is it fundamentally different?
Good question!! We're not sure. What we do know is that it doesn't go by majority vote. Sometimes a tiny piece of tissue convinces an entire organism to do something totally weird. We're even toying with the idea that it might be novelty (like, "here's an idea we've not had before, let's do this!") that sometimes puts it over the top. We're working on it, but we don't know exactly. It's a very important problem for biomedicine, because the name of the game there will be: find the most minimal, but most convincing, intervention for the target tissue.
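Just to make the hypothesis space concrete, here is a toy sketch – entirely illustrative; the "novelty" score and weights are invented, not anything we have measured – contrasting a plain majority-vote rule with a rule in which a small but highly novel proposal can win.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    label: str       # e.g., "eye" or "gut"
    votes: int       # how many cells back this outcome
    novelty: float   # hypothetical 0-1 score: how unfamiliar the "idea" is

def majority_rule(proposals):
    """Plain consensus by headcount: the biggest faction always wins."""
    return max(proposals, key=lambda p: p.votes).label

def novelty_weighted_rule(proposals, novelty_weight=20.0):
    """A made-up alternative: persuasiveness = votes * (1 + w * novelty),
    so a tiny tissue with a sufficiently novel proposal can carry the day."""
    return max(proposals, key=lambda p: p.votes * (1 + novelty_weight * p.novelty)).label

options = [Proposal("gut", votes=900, novelty=0.05),
           Proposal("eye", votes=100, novelty=0.95)]
print(majority_rule(options))          # -> "gut": the big faction wins by headcount
print(novelty_weighted_rule(options))  # -> "eye": the small but novel faction wins
```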
It’s absurd to say your liver is conscious!
“Absurd” is ok as poetry – yes, it certainly seems absurd given how baked in the parochial views of other minds are. Like a lot of art, it calls for us to examine ourselves to see why it seems so absurd, and perhaps shift our priors. But what that statement isn’t, is science, engineering, or even philosophy. In order to be any of that, someone would have to not just *claim* it’s absurd (as the implications of relativity, quantum mechanics, and a lot of math seem, to our classical intuitions), but actually do the hard work and clarify *why* networks of other kinds of cells (with their ion channels, electrical synapses, neurotransmitters, microtubules, etc. etc.) can’t possibly do some degree of what the neurons in your brain do. Good luck. “Absurd” makes it seem like, of course there’s a convincing consensus story explaining it. But there isn’t. And more importantly, they’d have to show what practical benefit comes from their view – produce something useful, interesting, etc. – show the fertility of this binary way of thinking. I’ve yet to see either. In contrast, I’ve summarized many times the empirical benefits of investigating the continuum view. So do poetry if you want, but let no one confuse it for a scientific comment on a scientific claim, or a useful engineering pointer.
Anyway, my position isn’t that we know our organs to be conscious (any more than we *know* each other to be conscious). My position is that for the exact same reasons people currently attribute consciousness to brainy animals (and I list those reasons here), we should take seriously (i.e., try to shoot down in a principled way – not by fiat) the possibility that other kinds of meat can be a conduit for consciousness too.
Any “goals” a body organ may have are limited to that body.
Yes, probably. I didn't claim you'd be having discussions with your liver about the financial markets or the latest film you saw. It cares about things happening in the physiological state space of itself and its neighboring organs, and with new tools (under development now!) we will, hopefully, get a glimpse of its world and communicate with it about the things it cares about. Btw, if it didn't have both care/intent and a degree of intelligence to meet its goals under challenging circumstances, we'd be dead in short order. In any case, all cognitive beings are limited in how much of the world they can take in.
Why do you argue with people online? Or, conversely, why don't you argue when someone obviously has your view wrong or is critiquing it in an invalid manner?
I mostly don't engage because it's an infinite task – it would eat up all my time. All I can do is try to be as clear as possible in what I personally say, and let others figure out what they want to go with. When I do reply, it's almost never about convincing the original person arguing – it's to clarify my position on points that others may be wondering about as well. I'm currently experimenting with having Comments open on our YouTube channel (although I never read them or have time to reply to them). I think the jury is still out as to whether it's helpful to allow them or not. But the comments on this blog actually tend to be quite high quality – I usually respond to those.
"Emergent", in my understanding, describes not something unexplainable or never seen before, but rather a behavior that "emerges" from the interaction of multiple "lower-level" elements, which none of those elements is able to support by itself. Scaling often leads to emergence.
Letting slide for a moment the fact that this definition uses the very word it's trying to explain, the problem is that it covers pretty much everything. The sum of the angles of a square – emergent? The conclusion of a valid proof – emergent? The sum of a converging series? The derivative of a moving object's position? The integral under a curve? The sweater made from string (and its knots)? The parity (odd or even) of a string of digits? All emergent? Or are some things not emergent? What's the decision procedure? I just don't see anything it enables us to do. I'm still trying to understand what the word adds to our practical understanding or capability, whether it's a natural kind or a temporary label, whether it's relative to an observer's state of knowledge or surprise, etc. I don't think anything that flimsy can be used to support the thing some people want it to support (sharp, distinct categories separating "real minds" from "dumb machines"), which is the only reason we're talking about emergence – I don't actually care about it one way or the other, except that people think it carries some sort of oomph with respect to propping up binary categories, or as an alternative to an ordered space of patterns to be investigated. That's the only reason I'm pointing out its failings – it's fine, right until you try to use it for anything important or new. Maybe it even has some uses somewhere; I am not claiming it's impossible to find a use for it, I guess – just that it doesn't do the job it's often asked to do.
Why look for life extension? Our short lives (and our impending death) are what give us meaning.
I could almost buy the importance of short lives if we all had the chance to intentionally curate exactly what (and whom) we wanted to appreciate, experience, and love in the short time available to us. But most of us don't – huge numbers of beings world-wide spend their short lives without that opportunity; they get the short life, but not the benefit of being able to be choosy and thoughtful about how to spend it. So I would rather say f-u to the arbitrary limits set by frozen evolutionary accidents, cosmic rays, viruses, and selection pressures that don't care anything about love, beauty, or meaning. Maybe we can do better; at the very least, we can try. And it's not at all obvious to me that lives that were however long you wanted them to be, not however long your telomeres or your blurring bioelectric patterns wanted them to be, would have any less meaning. In fact, maybe it takes longer than ~80 years to gain the wisdom needed to really have an agential life. Evolution doesn't care about any of that, and there's no reason to think that our current lifespan is remotely sufficient to truly experience what we are capable of in terms of creativity, wisdom, and compassion. (And btw, if we think an ~80-year limit is great for making each moment count, then heck, why not reduce it to 20? Or 10? Then every moment would really count!)
But even water can solve a maze!
Right, aspects of physics and materials properties underlie intelligence (as is well understood and exploited by the field of morphological computing and others). Is the claim in that video that "real intelligence" has to mean there's no physics underneath? Or that when we figure out the physical mechanism, a problem-solving competency ceases to be a kind of intelligence and becomes "just physics" – that intelligence is whatever we can't find any explanation for? Or that all intelligence requires the agent to know what problem it's solving (which the voiceover mentions slime mold not having) – that there's no intelligence below reflective metacognition? Like most such pseudoproblems, the confusion in that video's argument is caused by assuming a binary "is/is-not intelligent" (and also by not specifying what "real" intelligence would mean exactly, and how it scales up from the activities of materials like water that make up our brains and bodies). I propose a simple engineering perspective on this issue: every material component has certain competencies I can count on when building with it. What can I count on water to do? Like many materials, I can count on it to minimize certain quantities in a context-sensitive manner (go down a gradient), and perhaps other things that need to be discovered by experiment (not assumed from a philosophical armchair – see Gerald Pollack's work). It's likely way over at the low end of the spectrum of persuadability, but it's not 0!! I know it's not 0 because I can depend on it as an engineer, and it saves me time and effort not having to micromanage it. I can count on Physarum to do more, and apparently I can count on chemical networks and materials to learn (see our papers on training GRNs, Walter Fontana's papers on probabilistic inference by chemical circuits, and a recent review about learning in materials). And it's exactly from those kinds of minimal capabilities that more complex kinds of intelligence are built up. Where else would they come from?! No big gotcha or paradox here; we just need to leave binary thinking behind and take an empirical, practical approach. Cognitive claims are interaction-protocol claims. We've known for >100 years now that some materials exhibit properties that, when treated with the tools and concepts of behavioral science, provide more empirical tractability than is afforded by not doing so, in deference to ancient categories propping up university department boundaries. All of this has been discussed in numerous reviews in the diverse intelligence field, but it still causes a kind of pearl-clutching when it is suggested.
I feel guilty about doing experiments with planaria and other small creatures…
Yes; I feel guilty about the small creatures as well, absolutely. But I feel more guilty about letting down the human patients who email me every day in the most horrific medical suffering and who ask, "wtf is taking you all so long, figure this out!" That's one of the tragedies of this world: no one has their hand off the trolley lever. Doing nothing because you feel guilty about scientific research is just shifting victims. Now, I'm not mocking the view that says "I see the suffering, and I consciously decide that I'm not smart enough to adjudicate the morality tradeoffs, so I will refrain from action"; I can respect that view (assuming it's coming from a vegan who spends most of their time fighting against factory farming, etc.). But almost no one really holds that view (because even as people complain about Xenobots and such, they uniformly run to the hospital when their kids get sick, and pray like hell that someone has figured something out). It's 99.9% lack of knowledge/imagination about where medical treatments come from and what you would do if you or someone you loved had a treatable problem, and 0.1% experience-informed deliberate decisions. I'm just estimating the percentages here, I really don't know…
Machines, like computers, can’t produce consciousness.
I agree: computation – as in, our formal model of algorithms – cannot produce consciousness ("understanding" is not synonymous, so we should decide which thing we're talking about, and perhaps even specify how biochemical brains get around these limits). But I think not even simple "machines" are well captured by those formal models. Let's not mistake the model, and its limitations, for the real thing. More soon, but I cover this a bit here and here. Also, I don't think anything we do – synthetic or biological – "produces" consciousness; what we do is make interfaces that facilitate the ingression of consciousness into the physical world.
The stuff you talk about, such as collective intelligence, has been mentioned in many ancient texts.
Yes, absolutely, some of these are very old concepts. Btw, my favorite classic version of this idea is that of "group karma" – very close to collective intelligence (CI) in its approach to composite agency. But to clarify: as with many very old concepts, the useful thing is not to "merely mention" them, but to find actionable ways in which they advance research. What I did (that's relevant here) is to formalize how collective intelligence can be used to understand, predict, and control morphogenesis as problem-solving behavior in anatomical space, and my lab created tools and used them to test specific hypotheses of biophysical mechanisms by which the goal states of biological CIs can scale up or down. There's also some other work on seeing molecular pathways as CIs that can be trained, which is being developed for drug-conditioning applications, etc. These things unlocked a number of new paths forward in regeneration, embryogenesis, and cancer suppression, and enabled a roadmap, pursued by people in my group and others', for developing new interventions (communication with that collective intelligence) that are heading towards the clinic (and also bioengineering). It's important to keep up with the primary literature, to see the practical outcomes of specific concepts and what is or is not moving beyond being merely mentioned.
I disagree with your emphasis on compassion toward novel kinds of creatures and minimal agents. We should be focusing on humans and such.
I agree with this – we have no argument. I say exactly the same thing – "our capacity to deal ethically with other minds we KNOW to exist is very low" – to people who email me about the ethical urgency of figuring out the status of Xenobots etc. It's not that once we know they can suffer, we will suddenly behave better; factory farming of pigs etc., and human history with each other, tell you that doesn't just happen. However, I do think it's critical to advance the study of diverse minds, for the following reason. Maybe it won't work on the current generation, but it's important for the forthcoming ones. If we get a mature theory of mind going – one not limited by the blinders we have on now – then maybe, just maybe, it will be harder and harder for us to deploy that in-group/out-group dynamic that humans love to set up: "They are not quite like us, so we don't need to treat them with compassion". Imagine – if the kids of the future understand the full range of our cognitive kin, won't it seem like a much easier lift to be nicer to things that, comparatively, are very much like us? That is my hope. I have no idea if it will happen, but in my more optimistic moments, I think that a scientifically-grounded theory of diverse intelligence will, in the long run, make it harder for people to do the mental gymnastics required to treat others as fundamentally incapable of real suffering like "us". Let's expand the spectrum radically (to the level that science supports, not fantasy), and the distance between us will shrink exponentially. The differences we get worked up about today will be laughable to mature humanity in the future.
Autonomic processes like cardiac rhythm or digestion perform intricate, life-critical functions without any conscious agency.
Well, you don't know that. The conscious agency of these systems is not usually available to the main, linguistic consciousness of the human body (the one who wrote the above question), but that doesn't mean they are not conscious themselves. That is, you don't feel my consciousness either, so it's no surprise that your left-hemisphere "you" doesn't feel your liver being conscious. I gave a talk about that, for example here. I don't have a strong new theory of consciousness out in public yet, but I do think we need to be very careful with assumptions like the one above.
Might voluntary motor circuits possess a level or type of self-organizing informational complexity that naturally aligns with the emergence of conscious experience, while autonomic systems, though complex, do not cross that threshold or gain adaptive benefit from such phenomenology?
It's possible, but we've been doing analyses (attached is an early example) of non-brainy signals using the same metrics neurologists use to distinguish a pile of neurons from an aware human mind with locked-in syndrome (for example), and – as I predicted – the results are very interesting. Much more is coming on this, so I am skeptical of the above distinction. Nevertheless, I do think it's interesting that our linguistic consciousness picked 3D space (and the muscle actuators needed to move through it) as the space it's aware of, instead of the many other spaces in which our bodies operate (physiological, anatomical, etc.). I suspect there's an evolutionary reason for that, and it could readily have been otherwise; we are making tools to try to give language to these other intelligences.
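For readers wondering what such metrics look like in practice: one family often used in the consciousness literature is compression-based complexity (e.g., Lempel-Ziv, which underlies the perturbational complexity index). Here is a minimal, generic sketch of that kind of measure applied to an arbitrary time series – purely illustrative, and not the specific analysis or data referred to above.

```python
import numpy as np

def lempel_ziv_complexity(bits: str) -> int:
    """Count the phrases in a greedy Lempel-Ziv (LZ76-style) parsing of a
    binary string: each new phrase is the shortest block not seen in the
    preceding text. Richer, less compressible signals yield more phrases."""
    i, count, n = 0, 0, len(bits)
    while i < n:
        length = 1
        while i + length <= n and bits[i:i + length] in bits[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count

def normalized_lz(signal: np.ndarray) -> float:
    """Binarize a signal around its median, then normalize the phrase count
    so that values near 1 indicate noise-like, hard-to-compress activity."""
    bits = "".join("1" if x > np.median(signal) else "0" for x in signal)
    n = len(bits)
    return lempel_ziv_complexity(bits) * np.log2(n) / n

t = np.linspace(0, 10, 2000)
print(normalized_lz(np.sin(2 * np.pi * t)))   # low: very regular signal
print(normalized_lz(np.random.rand(2000)))    # high: unstructured noise
```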
What do you think of Sheldrake’s hypothesis?
Sheldrake proposes a sensitization in the universe, a generic sensitization, and in a certain sense I go wider than he does, because I think the universe is full of not just sensitization but all kinds of other cognitive capacities that keep being found in matter. I don't make the larger claim strongly because I don't have a way of doing experiments at that scale.
How should we understand “meaning” in biological systems?
I think it’s the same in all systems (biological or not), but biology is what we tend to call the study of bodies of systems that are good at generating their own meaning and making it easy for us (as biological systems) to see that they’ve done so. I suspect it has something to do with structuring one’s experiences, memories, etc. in a way that provides long-range order. In other words, meaning is a kind of high-order generalization, interpretation, saliency, etc. that extends far from the immediate utility or applicability of a mental structure. It’s about extending the cognitive light cone of a thought so that its relevance to time and place far beyond the current, practical context becomes established. Here’s a paper on it.
Real minds operate by reasons; machines do so by causes.
This is a very thorny area, if you assume that our brains obey physics. Basically, it's very hard to specify what those words really mean in a useful way. Here's an attempt: reasons are what we call it when a system obeys high-order patterns in the Platonic space; causes are what we call it when a system obeys the low-order ones. I think it's a continuum.
You talked about the generic problem-solving capacities of systems. My question is: how complex must these systems be? The generic problem-solving capacities of a rock and a biological system (artificial or not) are bound to be very different. Or do you think this is a basic property of any matter?
1) If we ask what the most simple, most basic, low-end version of intelligence would look like – we know it won't look like advanced intelligence; we're purposely looking for the lowest point on the spectrum. What is the minimal requirement? I think there are a couple of criteria, but one key one is the simplest form of goal-directedness: least-action laws. Even a humble photon manages to find the least-effort path to its target (and amazingly, does it despite the fact that you would have to have already traveled all the possible paths to the end to really know which is the fastest one! Where is its machinery to do so? I think it's on the other side – particles are interfaces to the smallest minds in the Platonic space; all they know is this one trick). I think least-action laws are what intelligence looks like at the low end (and we call it physics; we've decided it's not a branch of psychology, but I think it is).
– So how do we recognize intelligence when it's so minimal? To make progress, I like a simple, observable engineering property: autonomy. Specifically, for any system, how much can I trust it to do without me being there to force it? Like, if I'm building something and I have a homeostat module, I know what I can count on it to do – keep a certain variable in a certain range – and I can delegate that job to it. It has non-zero autonomy, so I think it's on the spectrum. But surely the rock doesn't do anything by itself? Actually, it has a minimal capacity. Imagine: as an engineer building a roller coaster, I know I have to force it up the mountain. But I don't have to do anything for it to come back down! A true zero would be something where I had to worry about *everything* it was supposed to do. But even dumb rocks and single particles obey least-action principles, which means there are some things I can trust them to do. They will follow a gradient; they won't do delayed gratification, for example, but they will do the simple "taxis" (a toy sketch of this delegation idea appears after this list).
– I asked Chris Fields once: is it possible to have a world in which matter doesn't even know how to do least action? He said the only way to do that is to have a universe in which nothing ever happens (so that's interesting – a universe with one particle in it and nothing else has zero intelligence in it, but as soon as you have two things, the journey begins). But it means that in our world there is nothing so stupid as to not be on the spectrum of persuadability, and thus I believe that in this world, intelligence of some sort is baked into all matter – there's no truly dead matter anywhere, minimal though it may be. More accurately, it's not baked into the matter – matter of even the simplest kind is already a tiny interface or pointer through which certain kinds of patterns can ingress from the Platonic space.
– So what about the rock? I think we call "life" those things that are good at scaling the intelligence of their parts. The rock doesn't do anything that its particles don't already do, so we call it dead and not part of biology. The cell does many things its parts don't do, and so we call it life. Life, I think, is what we call systems that significantly align their parts so that the cognitive light cone of the whole is bigger (and projects into a new space) than that of its components. Richard Watson has some additional useful thoughts on what the parts are doing. In short, life is what we call systems that channel much more impressive patterns than their parts can.
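Here is a deliberately silly sketch of the delegation idea mentioned above – an illustration of degrees of autonomy, not a claim about real physics or cells; the components and numbers are invented. The point is only that different components let me walk away from different amounts of the job.

```python
# Two rungs on a toy "what can I delegate to it?" ladder.

def gradient_follower(x, slope, rate=0.1):
    """Rock/water-like competency: left alone, it reliably moves down the
    local gradient (simple taxis). That is all I can count on it to do."""
    return x - rate * slope(x)

class Homeostat:
    """A component I can hand a whole job to: keep `value` near `target`
    despite disturbances I never have to know about."""
    def __init__(self, target, gain=0.5):
        self.target, self.gain = target, gain
    def step(self, value, disturbance=0.0):
        value += disturbance                                # something I didn't plan for
        return value + self.gain * (self.target - value)    # it corrects anyway

# Delegation test: neither system needs me to micromanage it.
x = 5.0
for _ in range(100):
    x = gradient_follower(x, slope=lambda p: 2 * p)   # descends toward the minimum at 0

h, temperature = Homeostat(target=37.0), 30.0
for _ in range(100):
    temperature = h.step(temperature, disturbance=0.3)  # stays pinned near 37
```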
2) There's one other thing, though currently very few people like this line of thinking (the mechanists think there's nothing special in life, and the organicists think the magic of life doesn't exist in "computers"). Even extremely small, deterministic, obvious algorithms – like bubble sort – turn out to be doing things not explicitly in the algorithm. They do delayed gratification and some other side-quests no one noticed in 60+ years of study (because everyone assumed they could only do what's in the algorithm, being good machines; a toy sketch of this behavioral way of looking at an algorithm is given after the list below). These extra things they do are not merely unpredictability or complexity, but patterns that would be recognizable to any behavioral scientist if they appeared in a more familiar guise. This (and other as-yet unpublished data) is telling me that
– our models of dumb machines that only do what the algorithm says are as unfit for relating to actual machines as they are for life (i.e., there may not be anything anywhere in our world that is fully encompassed by our formal models of algorithms);
– we call "life" those systems that really, really amplify these intrinsic motivations so that it's obvious to us; the ones that do it only "a little bit" we sweep under the rug and call them machines.
– we can say it's "emergent" from the algorithm, but that doesn't help anything; it totally breaks the whole point of an algorithm, which was supposed to describe what the thing is going to do!
– this means that I'm not a computationalist (I don't think following an algorithm makes you alive, conscious, etc.), but I *do* think that things we call machines are on the spectrum with us – not because of their algorithms, but precisely because they don't just obey their algorithms any more than we do. I think this ability to get more out than you put in (which I blame on the ingressions from this Platonic space) is a magic that haunts everything – biological bodies, engineered bodies, software systems, etc. Interestingly, this idea (that our connection to the ineffable in this space is not unique to us, to life, to cells, etc., but is also accessible to mundane "machines") gets me more hate mail than my claim that cells and molecules have memory, cognition, etc. The mechanists try to be rational in shooting it down – they argue that the best stories will always be in terms of chemistry, nothing above (and also, weirdly, nothing below either – somehow no one wants to go down to modes of quantum foam; they love chemistry for some reason). But the organicists and the religious folks, who like my non-reductionism otherwise, can get pretty nasty. We want to be exceptional, I guess, and if it can't be just big brains, or some brains, or cells, then at least it had better be "organic material" that makes us unique. Some corner of the world needs to not be special for us to feel special. That's my amateur psychoanalysis of what's going on. My unpopular stance that the ineffable can also ingress through mere "machines" does have one up-side: it keeps at bay some communities who would otherwise claim that my views support their human-centric, non-science agenda.
– so the AIs may have something interesting going on, but not because they talk as if they do – the talking is irrelevant, because we made them talk with an algorithm. What's much more interesting is what they might be doing that the algorithm doesn't mention. If dumb bubble sort can do it already, I strongly suspect LLMs and other large constructs do a lot of things we haven't checked for yet (as does even minimal life, and even "non-living" materials). But we don't know, and the language use may be a red herring.
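As promised above, here is a toy sketch of what "looking at an algorithm behaviorally" can mean. This is my own minimal illustration, not the methodology of the sorting study referenced earlier: run ordinary bubble sort, log a simple order metric after every comparison, and then inspect the trajectory (rather than just the final sorted output) for structure the algorithm's description never mentions.

```python
def sortedness(a):
    """Fraction of adjacent pairs already in order (1.0 = fully sorted)."""
    return sum(a[i] <= a[i + 1] for i in range(len(a) - 1)) / (len(a) - 1)

def bubble_sort_trace(values):
    """Standard bubble sort, but we record the order metric after every
    comparison so the run can be studied as a behavioral trajectory."""
    a = list(values)
    trace = [sortedness(a)]
    for end in range(len(a) - 1, 0, -1):
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
            trace.append(sortedness(a))
    return a, trace

final, trace = bubble_sort_trace([5, 1, 4, 2, 8, 3])
print(final)   # [1, 2, 3, 4, 5, 8]
print(trace)   # the step-by-step trajectory, which one can then examine
               # for detours, plateaus, and other patterns
```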
When I mention Xenobots, people often respond that their form might not be truly novel, but rather a past, encoded form re-triggered by conditions. Is there evidence that rules this out?
OK, we can think about it in three layers (and it's not just Xenobots – also Anthrobots, and much more coming).
1) Let's say it's true. It's amazing that "conditions" can trigger a totally different ancient form. What are these conditions? Both kinds of bots live in their totally normal physiological media, and have no genetic edits, no synbio circuits, no drugs, no scaffolds. The only thing that was done is to take away neighboring cells. It's quite remarkable that this alone is sufficient to trigger a past form of morphology, physiology, and behavior, and this is a new finding. Also, what does "encoded form" mean – encoded where? We think we know what DNA encodes – proteins, and some timing information about their expression. Are we saying that every genome also "encodes" all the past forms? Is there a suggestion of how to read them out? There are some thoughts on what "encoding" might mean here, here, and here.
2) It doesn't seem to be true. What past form of life in the human lineage was supposed to look and behave like Anthrobots, have their transcriptome (9000+ genes differentially expressed relative to the cells they come from), have the ability to repair nearby neural wounds, etc. etc.? I'm not aware of a good candidate for this.
3) It's not a hypothesis. "Might not be" is cheap and sterile; it doesn't lead to anything because it's compatible with anything. The more beneficial thing would be to a) say what prior form it was, and b) offer some predictive value about *what* form (morphologically, transcriptionally, and behaviorally) would be called up by specific environments. In the absence of that, it's not a helpful or falsifiable hypothesis. One can always say that "maybe something in the past explains it", but that gives no benefit to efforts in evolutionary biology, bioengineering, or biomedicine, all of which need predictive models of the problem-solving competencies of living material. In other words, we want to understand where new forms of behavior (in anatomical, physiological, and other spaces) come from, if it's not selection, and how we can make better interfaces (embodiments) for the forms we want. I don't believe it's plausible or useful to assume there used to be Xenobots/Anthrobots that specifically were selected to be good at what we see them doing now. If we say that they learned to do what they do at the same time the genome learned to be a frog/human given selection forces, then we break the specificity expected of evolution – to explain life's properties by a very specific history that led to them vs. to something else. I think it's more useful to try to understand how groups of cells implement massive plasticity to find novel ways of being despite their standard hardware, and to map out and exploit the structured latent space of possibilities which genomically-encoded hardware can access as a problem-solving competency (a.k.a. intelligence).
