I had a conversation recently in which we talked about a few issues related to biological plasticity and evolution; the following is a transcript of that discussion, organized around four specific topics that came up:
Memory: static structure or process?
First of all, we already know from the conventional model of neuroscience that memories are not static – there is no such thing as read-only memory; every time you try to recall a memory, you modify it. And there are no structures in your brain that remain unchanged for the 80 years, or however long, that we're going to be alive. So we already know that memory is a dynamic medium that is constantly being written and rewritten, and that memories are strengthened, but also modified, by recall.
Now, less conventional is the idea that it actually goes much deeper than that. Memory is not a matter of storing something with as high fidelity as you can. That's the most basic, simplest kind of memory, but it's not what we have. I'll give you a simple example. Caterpillars have a particular brain suitable for driving a soft-bodied vehicle in two-dimensional space. Caterpillars eat leaves; they also become butterflies. So they need to develop a new brain that is appropriate for a hard-bodied vehicle in three-dimensional space.
Now, what has been shown is that you can train a caterpillar to eat leaves on a particular color background. And when you test the resulting butterfly or moth, it will go to that background to look for food. So the memory persists, even though the brain is basically completely taken apart, dissolved – most of the cells die. Now, you might think the question here is how to keep a memory when the medium is refactored. That's step one, but it's not even the exciting part. The most exciting part, which people never talk about, is the following. Butterflies don't eat the same stuff that caterpillars eat. Butterflies don't care about leaves. They drink nectar.

So it's no good to store a memory of leaves to be found on this color disc, because the butterfly can't use it. It is irrelevant to the butterfly. It doesn't map onto the new body. So if you're going to keep that memory, you need to do two things. You need to generalize it from leaves to a category called food – generalizing from particulars to categories is a kind of intelligence. And you need to remap the memory onto a new architecture. The other thing the butterfly will have to do is actually execute the relevant behavior. That means the information now has to be linked to muscles that flap wings, whereas before, that information was being used to activate crawling.
I think what is really happening with memory is that it has to be not just stored, but generalized and imprinted onto a new, potentially greatly changed substrate. A dynamic, living agent cannot just keep things the same. The memories don't make any sense in a new context. In the context of a butterfly, what good are your memories of where to find leaves? This is a tractable example of the deep lessons (not the details) of one life carrying forward into the next. While the body changes radically between lifetimes, the information – the lessons learned – persists and moves forward, albeit in a transformed way.
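To make the two-step operation concrete, here is a minimal Python sketch – generalize, then remap. Everything in it (the category table, the class names, the effector strings) is an invented illustration of the idea, not a model of anything measured in insects:

```python
# A stored association is kept at the level of an abstract category
# ("food"), not a concrete stimulus-action pair, so it can later be
# remapped onto a new body with different effectors.

from dataclasses import dataclass

# Hypothetical ontology: the particular ("leaves") generalizes to a category.
CATEGORY_OF = {"leaves": "food", "nectar": "food"}

@dataclass
class Memory:
    category: str   # generalized content, e.g. "food"
    cue: str        # the context that predicted it, e.g. "green background"

def generalize(particular: str, cue: str) -> Memory:
    """Step 1: store the lesson, not the literal episode."""
    return Memory(category=CATEGORY_OF[particular], cue=cue)

class Body:
    """A body exposes whatever effectors it happens to have."""
    def __init__(self, name: str, approach_action: str):
        self.name = name
        self.approach_action = approach_action  # crawl vs. fly

    def act_on(self, memory: Memory) -> str:
        """Step 2: remap the generalized memory onto the current effectors."""
        return (f"{self.name}: {self.approach_action} toward "
                f"'{memory.cue}' seeking {memory.category}")

engram = generalize("leaves", cue="green background")  # laid down pre-metamorphosis
print(Body("caterpillar", "crawl").act_on(engram))
print(Body("butterfly", "fly").act_on(engram))         # same memory, new vehicle
```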
Context is critical for biological memories because evolution knows that memories are going to be reinterpreted by a future you that is not the same as past you. Your future brain might have undergone puberty and hormonal remodeling. It might have aged. It might have learned all kinds of stuff that makes the past knowledge appear in a new light. Memories are living, and they are constantly adapted. I wonder how much of that capacity lies in the cleverness of the host's mechanisms, and how much in some sort of basal competency of memories themselves to adapt, survive, and maintain themselves in whatever medium they can. As William James said, "thoughts are the thinkers". Maybe it's a collaboration of both – the drive of memories to persist, and the agency of the plastic cognitive apparatus that helps them adjust to a new environment.
There is also the idea that we, as beings at any point in time, don't have access to the past. What we have access to is the engrams – the memory traces that the past has left in our brain or body. We don't have direct access to what actually happened. So what that means is that at any given moment, you and I and all cognitive beings are a collection of temporal slices, each with a little bit of thickness, maybe a couple of hundred milliseconds or so. We have to reconstruct, in real time, a story of who we are, what we are, and what our past history is. It is a real-time process. See Nick Chater's book "The Mind is Flat".
Another way to think about memory is as communication between temporal slices – our Selflets. Your memory is a message left for you by your past self. Now, that sounds kind of crazy until you think about people with brain damage who cannot form new memories. What do they do? They leave themselves notes on a pad of paper that say, "You just woke up, here's what you need to know. You've got brain damage and this is what's going on…". And the last thing on that sheet is, "and by the way, before you go to bed, write another note". The rest of us do exactly the same thing, but we internalize it inside our skulls. They just export it to an outside medium. We don't need the pad – we have the machinery to do it internally – but it is basically the same process of message-passing to the future, of communication.
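That message-passing picture fits in a few lines of code. A minimal sketch, with arbitrary field names and a three-slice loop chosen just for the demo:

```python
# Each temporal slice ("selflet") reconstructs who it is from the message
# left by its predecessor, lives for a moment, then writes an updated
# message for its successor: memory as communication through time.

def selflet(note: dict) -> dict:
    """One momentary self: read the note, experience something, write the next note."""
    story = note.get("story", "You just woke up, here's what you need to know.")
    events = note.get("events", [])
    events = events + [f"moment {len(events) + 1}"]   # something new happened
    return {"story": story,
            "events": events,
            "reminder": "before you go to bed, write another note"}

note = {}              # the very first slice inherits no past
for _ in range(3):     # three successive temporal slices
    note = selflet(note)
print(note)
```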
This is what all collective intelligence is doing. We, of course, are a collective intelligence made of cells. What does the collective intelligence of ants and termites do to hold their colony-level thoughts together? They leave them in chemicals; their scratch pad is the sand they're crawling around on. They've got chemical messages. We all use some kind of substrate to keep track of what we're doing as a collective intelligence evolving forward through time.
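The ant version of this is classic stigmergy, and it can be sketched the same way – a shared substrate doing the remembering, rather than any individual. The numbers below are made up and not calibrated to any real colony:

```python
# Stigmergy: the colony's "memory" lives in a shared substrate (here a
# dict standing in for scent-marked sand), not inside any single ant.

import random

pheromone = {site: 1.0 for site in range(5)}   # five sites, uniform scent
FOOD_SITE = 3                                  # unknown to the ants

def ant_trip() -> int:
    """Pick a site in proportion to its scent; reinforce it if food is there."""
    total = sum(pheromone.values())
    r, acc = random.uniform(0, total), 0.0
    for site, scent in pheromone.items():
        acc += scent
        if r <= acc:
            break
    if site == FOOD_SITE:
        pheromone[site] += 0.5                 # write a message into the substrate
    return site

for _ in range(200):
    ant_trip()
# Later ants read the substrate, so the collective now "knows" site 3:
print(max(pheromone, key=pheromone.get))
```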
Our technology is only beginning to work like this. Eventually you'll be able to take a JPG of toast with your camera and send it to your toaster, which will figure out the recipe and act on it. Right now, our technological information is tied to one particular interpreter – tied to a rigid format and context; syntax over salience. We can barely move data between two different types of computers, never mind show a bagel photo to a bicycle that will then know how to get to a bakery. But life has been that way from the beginning, because senders and receivers (within and across organisms) are changing all the time.
The reason for the large-scale functional brittleness in our technology is that we are spoiled by the low-level reliability of our hardware. Your computer is never going to turn into a toaster, and engineers know that. That's why the people who design our current tech don't need to worry about that kind of change, or about making sure that information re-maps to stay relevant. But biology deals with unreliable hardware from day one. Evolution knows for a fact that, as a lineage of living organisms, your material is going to break. You don't know how many cells you're going to have, or which genes, etc. We have to have mechanisms to adapt information into a salient set of behaviors despite novelty in the environment and in our own parts. This got wired in for morphogenetic information during embryogenesis, and eventually, I think, expanded to behavioral information as evolution pivoted developmental bioelectrics, which deal with spatial behavior in anatomical morphospace, into neural bioelectrics, which deal with temporal behavior in 3D space.
And that's why life is incredibly interoperable. It has to get along with whatever it happens to have, which is discovered on the fly as beings come into the world – "play the hand you're dealt" is what the software of life is good at, because of its history of unstable environments and unstable parts (genetic change). Life doesn't overtrain on the priors of evolution. That's why all of your information, both morphological and behavioral, is remappable: the architecture never made the assumption that the hardware was going to stay constant.
Goals
First of all, I don't believe that having goals is binary. I like Wiener, Rosenblueth, and Bigelow's scale, developed in the 1940s – a continuum based on cybernetics that goes from passive matter all the way up to human metacognition, with some waypoints in between. "What kind, and how much" is a better question than "yes, you have them" or "no, you don't".

I have a different version of it that I've been pushing, called the spectrum of persuadability, which is much more continuous and asks how good you are at pursuing goals of different sizes. The cognitive light cone is the size of the biggest goal you could possibly pursue. And in all of those cases, I think life is a subset of cognitive beings. The things we call alive are things that are good at scaling their goals and pushing them into new problem spaces.
To be very specific, an individual cell has goals – very tiny goals in physiological and metabolic space. The only things a single cell cares about are its fuel level, its physiological status, etc. – basically all of its goals are the size of a single cell, with short memories and a small capacity to anticipate the future. Single cells have a very tiny cognitive light cone that operates within physiological space, metabolic space, and maybe some others.

When cells get together and make an embryo, they have huge goals. They're trying to build livers and kidneys and eyes, etc. You know those are goals because if you try to deviate them from it, they'll fight back. Goal-directed activity is not just the emergence of complexity. A goal is something that is revealed to an observer when the observer perturbs the system and it fights (with various degrees of competency) to still get to the goal state.
Morphogenesis in general absolutely does that. And that's why it has goals: not because it's complex, but specifically because it has the capacity to achieve the goal despite perturbations. Once you have groups of cells, if the cognitive glue mechanisms are working correctly, you now have large-scale goals in anatomical space. And if you go beyond that, you end up with an organism with a brain and nervous system, which then develops goals in three-dimensional space – and we recognize those as behavior. Those are the kind of goals that we know how to recognize; the other goals are hard for us to see.
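This operational definition – a goal is whatever the system defends against perturbation – is captured by the simplest cybernetic device, a negative-feedback homeostat. In this sketch the setpoint, the gain, and the "push" at step 10 are all arbitrary demo values:

```python
# "Having a goal" operationalized as resistance to perturbation: a simple
# negative-feedback loop keeps returning to its setpoint no matter how
# an observer pushes it off course.

def homeostat(state: float, setpoint: float, gain: float = 0.5) -> float:
    """One corrective step toward the goal state."""
    return state + gain * (setpoint - state)

state, setpoint = 0.0, 10.0
for step in range(30):
    if step == 10:
        state -= 8.0          # perturb it: the observer's experimental "push"
    state = homeostat(state, setpoint)

print(round(state, 2))        # back near 10.0: the system "fought back"
```

How hard, and how competently, the system fights back is what makes this a measurable spectrum rather than a yes/no property.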
Then eventually you end up with social goals and linguistic goals and who knows what other kinds. If I test a creature for the sizes of its goals, I can also experimentally ask: what stresses it out? For example, if all the states it cares about lie within a short window of time, backward and forward, and on the scale of meters, it might be a dog. A dog is never going to care about what happens three months from now, two towns over. But if you're a being working towards world peace and the stability of financial markets over the next two centuries, you're at least a human, because your goals are bigger than even your lifespan. By the way, that's a unique human trait – having goals that are bigger than your lifespan. If you're a goldfish, your goals are on the scale of minutes and you're probably going to live that long, so all of your goals are achievable in your lifetime. If you're a human, many of your goals are probably not achievable, and that's a uniquely human psychological pressure. And if you can literally, practically care about every sentient being on this planet and be actively working towards their well-being, you're some sort of Bodhisattva, because v1.0 humans cannot care about that many individuals at once in the linear range. After a certain (small) number, it just feels like "many", whether it's 1,000 people or 50,000 people. Understanding a system's goals and their magnitude can indicate the type of intelligence you're dealing with.
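If you want to play with the "size of goals" idea quantitatively, a toy data structure is enough. The numbers below are made-up order-of-magnitude placeholders for a cell, a dog, and a human – not measurements of anything:

```python
# The "cognitive light cone" as the spatio-temporal extent of the biggest
# goal a system can pursue. All values are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class LightCone:
    space_m: float    # spatial radius of the largest goal, in meters
    past_s: float     # how far back memory reaches, in seconds
    future_s: float   # how far forward anticipation reaches, in seconds

    def size(self) -> float:
        # One crude scalar for "goal size": space times temporal extent.
        return self.space_m * (self.past_s + self.future_s)

cell  = LightCone(space_m=1e-5, past_s=60,    future_s=60)     # micron scale, minutes
dog   = LightCone(space_m=1e3,  past_s=86400, future_s=3600)   # meters-to-km, hours
human = LightCone(space_m=4e7,  past_s=3e9,   future_s=6e9)    # planetary, centuries

for name, agent in [("cell", cell), ("dog", dog), ("human", human)]:
    print(f"{name}: goal size ~ {agent.size():.3g}")
```

Collapsing the cone to a single scalar is obviously crude; the point is just that goal size has spatial and temporal axes you can compare across systems.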
Universal hacking
In biology, and possibly outside of it too, everything is trying to manipulate everything else. By hacking, I don’t mean just negative exploitation, although that’s part of it, but using your understanding of a system to control it. The thing about hacking as a metaphor is that it implies using the system in a way it wasn’t intended to be used. In biology, you have to form your own perspective. No one tells you where the control knobs are or what was expected. You come into the world, you are confronted with your own parts, you’re confronted with neighbors, with parasites, conspecifics, predators, and prey. You need to figure out how all that works, well enough to survive. That means you’re going to hack it. You’re going to do everything you can to get things to go your way by sending signals, by actuating whatever parts you have. You also have to get your own components to do what they need to do. And all of it is hacking because there is no correct way to use the system. It’s only what the agent, as an observer, can figure out by experiment and modeling.
Every agent has some perspective on the world. From that perspective, they try to figure out where the control knobs are and build an internal model of the space so that they understand, “I want to go towards where life is good. And in order to do that, here are the things I can tweak, effector steps I can take.” There was a really cool paper called “The Child as a Hacker” – the idea that when children come into the world, they don’t know what the right way is to do anything. They have to figure it out. They build internal models of how to do things and they will subvert intended modes of interaction creatively. They can, because they don’t have any allegiance to your categories of how things are meant to be used. They have to build their own categories and interaction protocols, which may or may not match with how the other minds in the environment intended these things to be manipulated. And all successful agents are like that. Being an agent means you have to have your own point of view, from which you develop a version of how to cut up the world into functional pieces.
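A toy version of that knob-finding loop: an agent pokes each input of an opaque system once, records what moved, and keeps the result as its internal model. The black-box dynamics here are invented for the demo:

```python
# "Hacking" as discovering control knobs by experiment. The agent gets no
# manual; it perturbs each input in isolation and models the effect.

import random

def black_box(knobs: list) -> float:
    # Unknown to the agent: only knob 2 matters, and it acts inverted.
    return -3.0 * knobs[2] + 0.1 * random.random()

knobs = [0.0, 0.0, 0.0, 0.0]
baseline = black_box(knobs)

model = {}
for i in range(len(knobs)):                    # probe each knob in isolation
    trial = list(knobs)
    trial[i] += 1.0
    model[i] = black_box(trial) - baseline     # learned effect of knob i

best = max(model, key=lambda i: abs(model[i]))
print(f"knob {best} is the control knob (effect ~ {model[best]:.2f})")
```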
Context is subjective; it is a best guess. It is a set of affordances: you as an agent look around and say, "I can sit on this, and I can eat this other thing, and I can hide under this other thing. And I can have a deep conversation with this other thing. And this thing right here – I'm not going to have a deep conversation with it, but I can train it and make it do some stuff for me." You as the observer are going to decide what your context is and how you're going to see it. If you're good at it, you'll have a very adaptive life. If you're not, you'll leave a lot on the table.
Collective intelligence – from biology to human teams and societies
There are a few things that biology does to produce collectives and then scale up their goals. The thing is, people often hear those and think, "Oh, that's great – let's do what biology does in the human arena; we'll just implement those techniques." I don't think that's necessarily the way to go, because biology doesn't necessarily optimize for the same values that we (should) optimize for.
When you, as a giant collective of cells, go spend a day boxing, you will come home and say, "This was great – I achieved a bunch of social goals, I achieved some personal development goals, excellent." Nobody asked your cells and tissues whether they wanted to be killed by mechanical damage and then cleared out by the immune system in bruises. That is how collectives work. You gain capabilities at the collective level, but no one may be watching out for the welfare of the individual parts. We as humans, who place a huge degree of agency in the individual, may or may not want to adopt some of these policies. Certainly, in the political arena, that's been tried a number of times in history, and it always works out the same way – disaster. So I think we need to be looking for optimal policies for scaling collective intelligence, but not necessarily copying what biology does, because I don't think biology is tracking all of the values that we should hold sacred.
Images by Jeremy Guay of Peregrine Creative.
