Some thoughts on memory, goals, and universal hacking

I had a conversation recently in which we talked about a few issues related to biological plasticity and evolution; the following is a transcript of that discussion, covering three specific topics that came up:

Memory: static structure or process?

First of all, we already know from the conventional model of neuroscience that memories are not static, that there is no such thing as read-only memory – every time you try to recall a memory, you modify it. And there are no structures in your brain that remain unchanged for the 80 years or however long we’re going to be alive. So we know already that memory is a dynamic medium that is constantly being written and rewritten; memories are strengthened, but also modified, by recall.

Now, less conventional is the idea that it actually goes much deeper than that. Memory is not a matter of storing something with as high fidelity as you can. That’s the most basic, simplest kind of memory, but that’s not what we have. I’ll give you a simple example. Caterpillars have a particular brain suitable for driving a soft-bodied vehicle in two-dimensional space. Caterpillars eat leaves; they also become butterflies. So they need to develop a new brain that is appropriate for a hard-bodied vehicle in three-dimensional space.

Now what has been shown is that you can train a caterpillar to eat leaves on a particular color background. And then when you test the resulting butterfly or moth, they will go to that background to look for food. So the memory persists, even though the brain is basically completely taken apart, dissolved – most of the cells die. Now you might think that this is a question about how to keep a memory when the medium is refactored. That’s step one, but it’s not even the exciting part here. The most exciting part that people never talk about is the following. Butterflies don’t eat the same stuff that caterpillars eat. Butterflies don’t care about leaves. They drink nectar.

So it’s no good to store a memory of leaves to be found on this color disc, because the butterfly can’t use it. It is irrelevant to the butterfly. It doesn’t map onto the new body. So if you’re going to have that memory, you need to do two things. You need to generalize it from leaves to a category called food – generalizing from particulars to categories is a kind of intelligence. And you need to remap the memory onto a new architecture. The other thing the butterflies will have to do is actually execute relevant behavior. So that means that information now has to be linked to muscles that flap wings, whereas before it was being used to activate crawling.
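Here is a minimal toy sketch (in Python, with purely illustrative names – not a model of real insect neuroscience) of those two operations: lifting a specific memory to a category, and re-binding it to whatever effectors the new body happens to have:

```python
# Toy sketch of memory generalization + remapping across a body change.
# All structures and names here are illustrative assumptions.

specific_memory = {"target": "leaves", "cue": "colored background"}

def generalize(memory, ontology):
    """Lift a particular target to its category (leaves -> food)."""
    return {"target": ontology[memory["target"]], "cue": memory["cue"]}

def remap(memory, body):
    """Re-bind the generalized goal to the effectors the new body has."""
    return {"goal": f"seek {memory['target']} near {memory['cue']}",
            "action": body["locomotion"]}

ontology = {"leaves": "food", "nectar": "food"}
caterpillar = {"locomotion": "crawl"}
butterfly = {"locomotion": "fly"}

general = generalize(specific_memory, ontology)
print(remap(general, caterpillar))  # crawl toward food near the cue
print(remap(general, butterfly))    # same lesson, new effectors
```

The stored trace is only useful because it is kept at the level of “food near this cue” rather than “leaves on this disc”; the binding to crawling or flying is recomputed, not stored.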

I think what is really happening with memory is that it has to be not just stored, but generalized and imprinted onto a new, potentially greatly changed substrate. A dynamic, living agent cannot just keep things the same. The memories don’t make any sense in a new context. In the context of a butterfly, what good are your memories of where to find leaves? This is a tractable example of the deep lessons (not details) of one life carrying forward into the next. While the body changes radically between lifetimes, the information – the lessons learned – persists and moves forward, albeit in a transformed way.

Context is critical for biological memories because evolution knows that memories are going to be reinterpreted by a future you that is not the same as past you. Your future brain might have undergone puberty and hormonal remodeling. It might have aged. It might have learned all kinds of stuff that makes the past knowledge appear in a new way. Memories are living, and they are constantly adapted. I wonder how much of that capacity is in the cleverness of the host’s mechanisms, and how much is in some sort of basal competency of memories to adapt and survive and maintain themselves in whatever medium they can. As William James said, “thoughts are the thinkers”. Maybe it’s a collaboration of both – the drive of memories to persist, and the agency of the plastic cognitive apparatus that helps them adjust to a new environment.

There is also the idea that we as beings at any point in time don’t have access to the past. What we have access to is the engrams, the memory traces that the past has left in our brain or body. We don’t have direct access to what actually happened. So what that means is that at any given moment, you and I and all cognitive beings are a collection of temporal slices, with a little bit of thickness, maybe a couple of hundred milliseconds or similar. We have to reconstruct, in real time, a story of who we are, what we are, and what our past history is. It is a real-time process. See Nick Chater’s book “The Mind is Flat”.

Another way to think about memory is as communication between temporal slices – our Selflets. So your memory is a message left for you from your past self. Now, that sounds kind of crazy until you think about people with brain damage who cannot form new memories. What do they do? They could leave themselves notes on a pad of paper that says, “You just woke up, here’s what you need to know. You’ve got brain damage and this is what’s going on…”. And the last thing on that sheet is, “and by the way, before you go to bed, write another note”. The rest of us do exactly the same thing, but we internalize it inside our skulls; they just export it to an outside medium. We don’t need the pad – we have the machinery to do it internally – but it is basically the same process of message-passing to the future, of communication.
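A minimal sketch of that message-passing loop, with the notepad made literal as a file (the file name and message format are my illustrative assumptions):

```python
# Memory as communication between temporal slices ("Selflets"):
# each slice reconstructs its story from the trace the last one left.
from pathlib import Path

NOTE = Path("note_to_future_self.txt")

def wake_up():
    """The present slice never sees the past, only the engram it left."""
    return NOTE.read_text() if NOTE.exists() else "No message; start fresh."

def go_to_sleep(summary):
    """The last line of every note: tell the next slice to keep writing."""
    NOTE.write_text(summary + "\nBefore you sleep, write another note.")

context = wake_up()
go_to_sleep("You just woke up. Here is what you need to know today.")
```

Whether the scratch pad is paper, a file, or synaptic weights, the protocol is the same: a past self encodes, and a future self decodes and reinterprets.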

This is what all collective intelligence is doing. We, of course, are a collective intelligence made of cells. What does the collective intelligence of ants and termites do to hold their colony-level thoughts together? They leave it in chemicals; their scratch pad is the sand that they’re crawling around on. They’ve got chemical messages. We all use some kind of substrate to keep track of what we’re doing as a collective intelligence evolving forward through time.
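A toy stigmergy sketch (assumptions throughout – not a real colony model) makes the point concrete: no individual ant holds the colony-level thought; the shared substrate holds it for them:

```python
# Stigmergy: the collective's "scratch pad" is the environment itself.
import random

GRID = 20
pheromone = [[0.0] * GRID for _ in range(GRID)]  # the shared substrate
food = (15, 15)

def step(ant):
    x, y = ant
    if (x, y) == food:
        pheromone[x][y] += 1.0  # write a message into the sand
    # read the running notes: prefer marked cells, with some noise
    moves = [(x + dx, y + dy) for dx, dy in ((0, 1), (1, 0), (0, -1), (-1, 0))
             if 0 <= x + dx < GRID and 0 <= y + dy < GRID]
    return max(moves, key=lambda c: pheromone[c[0]][c[1]] + random.random())

ants = [(0, 0)] * 10
for _ in range(200):
    ants = [step(a) for a in ants]
    # evaporation: like biological memory, the medium is rewritten, never read-only
    pheromone = [[v * 0.95 for v in row] for row in pheromone]
```

The “colony-level thought” (where food is) lives entirely in the pheromone array; each individual only reads and writes its local patch.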

Our technology is only beginning to work like this. Eventually you’ll be able to take JPGs of toast with a camera and send them to your toaster, which will figure out the recipe and act on it. Right now our technological information is so tied to one particular interpreter – tied to a rigid format and context; syntax over salience. We can barely move data between two different types of computers, never mind show a bagel photo to a bicycle that will then know how to get to a bakery. But life is that way from the beginning, because senders and receivers (within, and across organisms) are changing all the time.

The reason for the large-scale functional brittleness in our technology is that we are spoiled by the low-level reliability of our hardware. Your computer is never going to turn into a toaster, and engineers know that. That’s why the people who design our current tech don’t need to worry about that kind of change, or about making sure that information re-maps to stay relevant. But biology deals with unreliable hardware from day one. As a lineage of living organisms, evolution knows for a fact that your material is going to break: you don’t know how many cells you’re going to have, or what genes, etc. We have to have mechanisms to adapt information into a salient set of behaviors despite novelty in the environment and in our own parts. This got wired in for morphogenetic information during embryogenesis, and eventually, I think, expanded to behavioral information as evolution pivoted developmental bioelectrics, which deal with spatial behavior in anatomical morphospace, into neural bioelectrics that deal with temporal behavior in 3D space.

And that’s why life is incredibly interoperable. It has to get along with whatever it happens to have, which is discovered on the fly as beings come into the world – “play the hand you’re dealt” is what the software of life is good at, because of its history of unstable environments and unstable parts (genetic change). Life doesn’t overtrain on the priors of evolution. That’s why all of your information, both morphological and behavioral, is remappable: the architecture never made the assumption that the hardware was going to stay constant.

Goals

First of all, I don’t believe that having goals is binary. I like Wiener, Rosenblueth, and Bigelow’s scale, developed in the 1940s – a continuum based on cybernetics that goes from passive matter all the way up to human metacognition, with some waypoints in between. “What kind, and how much” is a better question than “yes, you have them” or “no, you don’t”.

I have a different version of it that I’ve been pushing, called the spectrum of persuadability, which is really much more continuous and talks about how good you are at pursuing different sizes of goals. The cognitive light cone is the size of the biggest goal you could possibly pursue. And in all of those cases, I think life is a subset of cognitive beings. The things we call alive are things that are good at scaling their goals and pushing them into new problem spaces.

To be very specific, an individual cell has goals – very tiny goals in physiological and metabolic space. The only things a single cell cares about are its fuel level, its physiological status, etc. – basically all of its goals are the size of a single cell, with short memories and small abilities of anticipation into the future. Single cells have a very tiny cognitive light cone that operates within physiological space and metabolic space, and maybe some others.

When cells get together and make an embryo, they have huge goals. They’re trying to build livers and kidneys and eyes, etc. You know those are goals because if you try to deviate them from it, they’ll fight back. Goal-directed activity is not just emergence of complexity. A goal is something that is revealed to an observer when it perturbs the system and the system fights (with various degrees of competency) to still get to the goal state.

Morphogenesis in general absolutely does that. And that’s why it has goals: not because it’s complex, but specifically because it has that capacity to achieve the target despite perturbations. Once you have groups of cells, if the cognitive glue mechanisms are working correctly, you now have large-scale goals in anatomical space. And if you go beyond that, you end up with an organism with a brain and nervous system, and then it develops goals in three-dimensional space – and we recognize that as behavior. Those are the kind of goals that we know how to recognize; the other goals are hard for us to see.
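Operationally, that perturbation test is just error-correction toward a setpoint. A minimal sketch (all numbers and names are illustrative stand-ins, not a model of morphogenesis):

```python
# A goal, operationally: a setpoint the system works back toward
# after an observer perturbs it.

def settle(state, setpoint, gain=0.3, steps=50):
    """Error-correcting loop: act in proportion to the distance from the goal."""
    for _ in range(steps):
        state += gain * (setpoint - state)
    return state

target = 10.0              # stand-in for a target anatomy / goal state
state = settle(0.0, target)

state -= 6.0               # the observer's perturbation
state = settle(state, target)
print(round(state, 3))     # ~10.0 again: goal-directedness revealed
```

Complexity alone would not pass this test; only a system that spends effort to re-reach the setpoint after the perturbation does.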

Then eventually you end up with social goals and linguistic goals and who knows what other kinds. If I test a creature for the sizes of its goals, I can also experimentally ask: what stresses it out? For example: if all those states are within some short period of time back and forward, and on the scale of meters, it might be a dog. A dog is never going to care about what happens three months from now, two towns over.  But if you’re a being working towards world peace and the stability of financial markets over the next two centuries, you’re at least a human because your goals are bigger than even your lifespan. By the way, that’s a unique human trait – to have goals that are bigger than your lifespan. If you’re a goldfish, all of your goals are achievable in your lifetime because your goals are on the scale of minutes and you’re probably going to live that long – all of your goals are achievable. If you’re a human, probably many of your goals are not achievable and that’s a unique human psychological pressure. And if you can literally, practically, care about every sentient being on this planet and be actively working towards their well-being – you’re some sort of Bodhisattva because v1.0 humans cannot care about that many individuals at once, in the linear range. After a certain (small) number, it just feels like “many” whether it’s 1000 people or 50,000 people. Understanding a system’s goals and their magnitude can indicate the type of intelligence you’re dealing with.
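One way to make “size of goal” concrete is as a spatiotemporal bounding box. A hedged sketch, with order-of-magnitude guesses rather than measured values:

```python
# Cognitive light cone as the spatiotemporal extent of the biggest
# pursuable goal. The numbers below are rough illustrative guesses.
from dataclasses import dataclass

@dataclass
class LightCone:
    space_m: float  # spatial reach of the biggest goal, in meters
    time_s: float   # temporal reach (past + future), in seconds

    def can_pursue(self, goal_space_m: float, goal_time_s: float) -> bool:
        return goal_space_m <= self.space_m and goal_time_s <= self.time_s

YEAR = 3.15e7
cell  = LightCone(space_m=1e-5, time_s=60)          # single-cell scale
dog   = LightCone(space_m=1e3,  time_s=3 * 86400)   # meters-km, days
human = LightCone(space_m=4e7,  time_s=200 * YEAR)  # planetary, beyond a lifespan

world_peace = (4e7, 200 * YEAR)  # planet-wide, two centuries
print(cell.can_pursue(*world_peace))   # False
print(dog.can_pursue(*world_peace))    # False
print(human.can_pursue(*world_peace))  # True
```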

Universal hacking

In biology, and possibly outside of it too, everything is trying to manipulate everything else. By hacking, I don’t mean just negative exploitation, although that’s part of it, but using your understanding of a system to control it. The thing about hacking as a metaphor is that it implies using the system in a way it wasn’t intended to be used. In biology, you have to form your own perspective. No one tells you where the control knobs are or what was expected. You come into the world, you are confronted with your own parts, you’re confronted with neighbors, with parasites, conspecifics, predators, and prey. You need to figure out how all that works, well enough to survive. That means you’re going to hack it. You’re going to do everything you can to get things to go your way by sending signals, by actuating whatever parts you have. You also have to get your own components to do what they need to do. And all of it is hacking because there is no correct way to use the system. It’s only what the agent, as an observer, can figure out by experiment and modeling.

Every agent has some perspective on the world. From that perspective, they try to figure out where the control knobs are and build an internal model of the space so that they understand, “I want to go towards where life is good. And in order to do that, here are the things I can tweak, effector steps I can take.” There was a really cool paper called “The Child as a Hacker” – the idea that when children come into the world, they don’t know what the right way is to do anything. They have to figure it out. They build internal models of how to do things and they will subvert intended modes of interaction creatively. They can, because they don’t have any allegiance to your categories of how things are meant to be used. They have to build their own categories and interaction protocols, which may or may not match with how the other minds in the environment intended these things to be manipulated. And all successful agents are like that. Being an agent means you have to have your own point of view, from which you develop a version of how to cut up the world into functional pieces.
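In that spirit, hacking can be caricatured as experiment-driven control of an opaque system: probe the knobs, keep what moves you toward “where life is good”, and discard the rest. A minimal sketch (the “system” and its hidden knob are my illustrative assumptions):

```python
# Hacking as experimentation: no manual, no intended use - just probe
# the knobs and keep whatever works from your own perspective.
import random

def opaque_system(knobs):
    """Unknown to the agent: only knob 2 actually matters here."""
    return 3.0 * knobs[2] - abs(knobs[0])

def hack(n_knobs=5, trials=200):
    knobs = [0.0] * n_knobs
    best = opaque_system(knobs)
    for _ in range(trials):
        k = random.randrange(n_knobs)           # pick a knob to perturb
        old = knobs[k]
        knobs[k] = old + random.uniform(-1, 1)  # run the experiment
        score = opaque_system(knobs)
        if score > best:
            best = score                        # keep what works for you
        else:
            knobs[k] = old                      # undo; no allegiance to theory
    return knobs, best

print(hack())
```

The agent never learns the designer’s intent (there is none); it ends up with its own functional carving of the system: knob 2 is a control knob, the others are noise.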

Context is subjective; it is a best guess. It is a set of affordances where you as an agent look around and say, “I can sit on this and I can eat this other thing and I can hide under this other thing. And I can have a deep conversation with this other thing. And this thing right here, I’m not going to have a deep conversation with it, but I can train it and make it do some stuff for me.” You as the observer are going to decide what your context is and how you’re going to see it. And if you’re good at it, you’ll have a very adaptive life. And if you’re not good at it, then you’ll leave a lot on the table.

Collective intelligence – from biology to human teams and societies

There are a few things that biology does to produce collectives and then the scaling of goals. The thing is that people often hear those and think, “oh, that’s great, let’s do what biology does in the human arena – we’ll just implement those techniques”. I don’t think that’s necessarily the way to go, because biology doesn’t necessarily optimize for the same values that we (should) optimize for.

When you, as a giant collective of cells, go spend a day boxing, you will come home and say, “this was great – I achieved a bunch of social goals, I achieved some personal development goals, excellent”. Nobody asked your cells and tissues whether they wanted to be killed by mechanical damage and then cleared out by the immune system in bruises. That is how collectives work: you gain capabilities at the collective level, but no one may be watching for the welfare of the individual parts. We as humans, who have a huge degree of agency at the individual level, may or may not want to adopt some of these policies. Certainly in the political arena that’s been tried a number of times in history, and it always works the same way – disaster. So I think that we need to be looking for optimal policies for scaling collective intelligence, but not necessarily copying what biology does, because I don’t think biology is tracking all of the values that we should hold sacred.


Images by Jeremy Guay of Peregrine Creative.

38 responses to “Some thoughts on memory, goals, and universal hacking”

  1. Ben Moskowitz

    These are some brilliant thoughts! Relating to memory, current AI systems seem to be trying to remember everything rather than do what you’re saying here and remap memories to new contexts. For the task of building AGI, does having a form of memory that can map to new hardware/contexts seem like a necessary component?
    Additionally, new models such as Gemini 1.5 seem to be able to remember the content in its context window to near entirety, a much greater recall than humans. Do you think that’s an advantage towards greater intelligence or that what humans do of recalling the general contours is advantageous since it allows for simpler remapping?

    1. Zach C

      My suspicion is that the answer is both, guided by a heuristic. Energy and economics in open, not closed systems, is about different levels of energy creating unique behaviors. And once you have competent things exploring their different energy spaces, it becomes obvious that cooperation is not coordination at all, but heuristics that are easy to reach and guide relationships.

      Energy and the environment is not knowable despite what death cultists may say. I suspect there is no global general intelligence. There is a contextual fitting to multiple different kinds of spaces with different energy levels. And everyone gets the freedom to pick their cheap heuristics.

      A simpler more robust way to understand the rush towards AGI is the economics of materials, human visible energy, and delusion.

      1. April Jablonski

        Your comment resonated with me. Thanks for sharing.

    2. Mike Levin

      I think the advantage living systems have (for now) is that they evolved with *unreliable* hardware. This forces selection for algorithms that can do the remapping and generalization – intelligence, I think, is an unavoidable side effect of robustness to damage (molecular and up). Our machines, so far, have been made of very reliable hardware – that’s a straightjacket which is hard to escape from. Maybe possible, but we don’t know. As McGilchrist and others point out, constraint is very powerful for unleashing creativity and deep product. Lack of constraint allows shallow, trivial solutions that are brittle.

      1. Vicente Sanchez-Leighton

        You write “Our machines, so far, have been made of very reliable hardware” – well, not everyone agrees 😉 https://static.ias.edu/pitp/archive/2012files/Probabilistic_Logics.pdf. I would side with Turing, who introduces a nuance into what you say about the link between intelligence and robustness to damage: “In other words, then, if a machine is expected to be infallible, it cannot also be intelligent.” (lecture to the London Mathematical Society, February 1947).

        1. Mike Levin

          This is great, thanks!

  2. Tony Budding

    What a great description of how life really works. Inherent in life is self-interest in the form of perpetuation. The first order self-perpetuation is survival, and then as you describe, we evolve into higher order goals of a huge variety. Some of these goals can include others, which requires us to identify with those others. Some version of “I am part of this whole, so acting in a way that benefits the whole also benefits me.” When we say someone is a good person, it’s because they include others in their identity. When we say someone is a bad person, they put their own interests above everything else.

    There is one sentence in the article, though, that I think isn’t right. It’s a small detail, but with enormous implications. The sentence is, “The thing about hacking as a metaphor is that it implies using the system in a way it wasn’t intended to be used.” Instead, I think a better framing is, “Hacking implies using the system in a way that isn’t currently employed.” All intended uses are as modular as all other forms of intelligence. There is no objective or absolute intent. It’s always perspectival.

    Mike, this blog is fantastic. You don’t need to share all this, but we all benefit from it. Thank you!

    1. Turil Cronburg

      The interesting thing here when it comes to the idea that there’s such a thing as a “good person” who includes others in their problem solving functions and a “bad person” who is entirely focused on their own self’s problems/needs is that a healthy system requires ALL of the different types/levels of functionality to work. At all times, we need the simple “self-focused” systems like bicycles and viruses just as much as we need wizened old philosophers and mature scientists and young artists and novice product testers too.

      This freedom to diversify/specialize is how evolution creates emergent systems that increase the ability of individuals to collaborate, even when those individuals have no idea about anyone else’s needs or goals. Those cells in the bruise that was created through my body’s playful and muscle-building movement are “happy” to die doing their work, because that’s how they were made/programmed, by evolution/entropy/randomness. While it might look violent and cruel, it’s usually fully consensual and voluntary. Which is how this can scale up to a planetary organism. Just let everyone do what they really want, and support them in getting what they need to do their work well, and the whole chaotic system of nature will scale up to the Earth herself.

      1. Tony Budding

        Turil, yes for sure healthy systems include a wide variety of functions. And most (if not all) living creatures are capable of different functions at different stages of modularity. Even individual cells can function for their own survival or to benefit the whole (and these two can certainly be aligned). As you wrote, some of your cells are “happy” to die for the sake of the whole. Indeed, complex creatures could never exist without this holistic function included in their potential activities at each stage of modularity.

        In terms of good and bad people, and this is a subtle distinction, but my statement was about when “we say” someone is a good person or a bad person, which people do all the time. This is very different from bifurcating all people into one of two rigid categories. What’s implied in the general distinction, though, is important.

        Let’s consider the quest for scientific progress. When a researcher includes in their identity the scientific community (or the general public) as a whole, they’re more likely to operate with integrity and less likely to falsify or misrepresent findings in order to further their own career. Obviously, this is an oversimplification, and there is a full continuum of considerations that varies across functions and stages of modularity.

        Questions of morality accompany almost all developments in science and technology, with AI being one of the hottest topics these days. Morality itself is fraught with issues, as the terms are usually dictated by those in power who hope to keep that power. My preference is to stick to cause and effect trends as much as possible. Those who include others in their identity are more likely to act with integrity and more likely to benefit the whole system than those who only consider the benefit to themselves regardless of the impact on others. And this is true on a micro and macro scale.

        I do think these conversations are important, not just in science but in society. They’re certainly tricky conversations to have, and it’s easy for people to jump to conclusions prematurely or unfairly. What I like most about this one factor is its utility. At any point, a person can stop, look around, and consider the impact of their actions on others. It’s hardly a salve or cure-all, but it’s easy to see how this increases the likelihood of greater benefit. And isn’t that the goal of scientific knowledge?

        1. Turil Cronburg

          This is why I say that we need all the different specialized roles. We need the “higher” levels of motivation (as shown in Maslow’s hierarchy with self-transcendent levels, for example), but we also need the “lower” levels, as in the bicycles, genes, viruses, children, teenagers, and everyone who cares deeply about things the scientists, artists, and philosophers find boring, like plumbing, taste testing wine, documenting who’s at what parties, and what sort of color to paint a wall if the couch is green. The “selfish” cells that are “happy” to die during muscle development aren’t caring about the whole organism, they’re just doing what they want to do, without caring about anyone else. The fact that they serve the larger system with no intention to do so is simply the natural diversity of evolution.

          As for science it’s not the overall goal of a planetary organism. It’s just one part out of many. All the lower and higher goals are necessary as well. But if your goal is indeed to create novel models of reality, then, yes, you absolutely need to look around at what other systems are doing, outside of your own.

          1. Tony Budding

            Turil, it seems like we are mostly in agreement. Of course we need all the different specialized roles, and of course, science isn’t the overall goal of a planetary organism. But I would like to push on one concept because it’s directly relevant to this blog.

            You wrote, “The “selfish” cells that are “happy” to die during muscle development aren’t caring about the whole organism, they’re just doing what they want to do, without caring about anyone else.” Accounting for the limitations of language, I don’t think this is right. Caring is a form of intent, and intent is modular. We have to be careful with modular phenomena because the various stages of aggregation change the behaviors.

            Mike has posted links to research that demonstrates these two modes (self-oriented and group-oriented) at the cellular level (sorry I don’t have the links handy). Terms like “happy,” “selfish” and “caring” seem more appropriate for advanced stages of modularity. Metacognition is required for considering alternative courses of action, which individual cells don’t have. I would say, though, that cells that die in service of the organism are acting in their group orientation, not their self orientation. And since both modes are natural and inherent, they can’t be “wrong.”

            On your website, you sign off with “Namaste.” Are you familiar with the Sāṁkhya concept of the three guṇas? If so, they do offer quite a bit of insight into how intent operates at all different stages of modularity. I have a lot written about this if you’re interested (tony dot budding at gmail).

            1. Turil Cronburg

              I put those emotional seeming words into quotes for a reason. 🙂 It’s the same process as what the higher consciousnesses do, just at a more basic level. All life cares (has reactions to how their actions turn out), or it wouldn’t ever be able to adapt to new environments/situations. And all of those reactions are either positive or negative (or neutral), meaning they are happy or sad or content in the basic meaning of these terms. And the level of awareness/intent that a system has can be self-focused entirely (selfish) or extend out into others (as in mammals and birds) to some extent, with increasing levels of awareness (imagined individuals who do exist elsewhere, or even impossible individuals who never exist) in primates. Yes, the dimensions of awareness/intent do change, but the basic concepts exist at all levels of life, from cell to planet.

              It’s true that there are more collaborative cells in a multicellular organism, but I would say they are collaborative because of their “selfish” goals, not some awareness/care/motivation to serve others’ goals. My heart cells don’t pump because they want me to live. They pump because that’s what their design (genes) and local environment (other heart cells) influence them to do. And they’d “happily” die if that was what they were influenced to do – not out of some sort of intentional sacrifice for, or even awareness of, the whole, but simply because it’s what they were programmed to do. The difference between the collaborative cells and the non-collaborative ones is simply one of random design (for collaboration) + local conditions.

              As for the gunas, I probably looked into them a decade ago or more, when I was doing more research on traditional categories/patterns. Though looking at Wikipedia’s entry about them, they don’t seem super familiar (nor particularly relevant to my work). I tend to work with Pascal’s triangle, and the binary growth process (waves of periodicity on multiple dimensions).

              Oh, and thanks for checking out my old blog!

    2. Mike Levin

      > There is no objective or absolute intent. It’s always perspectival.

      yes I agree; it’s just that in the currently common sense of “hacking”, there was an engineer(s) who made the system to be used in a specific way – they had intent, so I was playing off of that definition/usage. But of course the viewpoint of other agents is equally valid and they will find new ways to use it that are not intrinsically less “correct” than the original engineer’s. All that matters is how well a given strategy works out for the observer.

  3. Lio Hong

    Ever since I read your paper touching on Buddhism, I’ve been curious how you encountered the concept of a bodhisattva, and similarly a buddha.

    It was said that Siddhartha Gautama recollected hundreds of his past lives during his enlightenment. So in TAME terms, he was expanding his computational light cone, until it was large enough to encompass a sizeable portion of the human world. There is a possible link to buddha-spheres, which involves influencing a great number of other beings for better moral standing, but it’s quite a specific term.

  4. Turil Cronburg

    These are how I see your levels of generation of effective action (“persuadability”) maybe fitting into the work I’ve done on developmental stages and the kinds of functions and questions each work with:

    0. Hardware modification – biological problem solving – 0th person perspective taking – biological – my genes – How many/much?
    1. Data encoding setpoint for goal – physical problem solving – 1st person perspective taking – me – What doing?
    2. Rewards/punishments – emotional problem solving – 2nd person perspective taking – you – Where?
    3. Communicating cogent reasoning – intellectual problem solving – 3rd person perspective taking – our community/ecosystem – How?
    4. (You don’t list) – philosophical problem solving – 4th person perspective taking – the laws of physics – When?

    These are separate, additive dimensions. In other words, these “stack”. To get to caring about rewards and punishments (second person goals: serving my friend’s needs), I need to first be successful in having a functional body (zeroth person goals), and being able to vary local conditions based on my goals (first person goals).

    Also, we could call that very first, 0th stage of biological/genes level something more akin to whatever we call a transistor and its permanent programming/code/design, so that we don’t limit this type of development to just protein-based life forms.

  5. Fernando Garcia

    Great read!

    You summarized and connected many different ideas into a single post. I had seen your light cone idea before but I hadn’t really grasped its meaning until now. It also helped me tie my own ideas and understanding about cellular automata and computing.

    Thank you!

  6. Nicholas

    A wondrous & enjoyable read, and always very helpful, so thank you! Much informatic & soulfully humorous stimuli for self-referential processing and has offered me a little extra context upon my prior shared bootstrap paradox ramble!

    Intriguingly, and somewhat comically, as I read down to the cognitive cone diagram, I found myself pausing as I stood, playfully repeating the one memory engram that randomly popped into my head from the film ‘Bedknobs & Broomsticks’, and said out loud: “what’s that got to do with my knob”. It came with a vibrantly warm biophysical resonance of childhood memory (if that makes sense), as this quote was a running joke between me & my second eldest brother at any given opportunity! We found it hilarious; Mother was perhaps not so impressed given the tone of the joke, but that just made it more hilarious, because we knew the tone didn’t matter and she couldn’t help but laugh simply because she enjoyed lovingly observing her babies having fun & laughing!

    A cool & befitting YT clip of the scene;
    https://www.youtube.com/watch?v=F5hXY7bmAJg

    Peace & Love Always

    NiCo

  7. David Bloomin

    I’m confused about instrumental goals vs terminal goals, since they seem a matter of perspective. Let’s use the goldfish example:

    “””
    If you’re a goldfish, all of your goals are achievable in your lifetime because your goals are on the scale of minutes and you’re probably going to live that long – all of your goals are achievable.
    “””

    The goldfish might have opinions about where to lay eggs, which could be a function of its current environment. However, those opinions were tuned by evolution to maximize the likelihood of its genes propagating, possibly many generations into the future.

    We could imagine perturbing the environment so that goldfish-behavior-1 results in the same number of children, but more grandchildren than goldfish-behavior-2. We could then view the goldfish choosing behavior 1 as trying to accomplish a far away goal. It’s true that the goldfish has to make choices based on its current observation of the environment, so we would always be able to tie it back to some instrumental goal like: “goldfish wants low acidity spot to lay its eggs”. But the instrumental goal is in service to a larger goal.

    It seems like competence is not really separable from goal definition. If the goldfish was able to deduce future conditions from current observations, it would be maximizing future goals.

  8. Benjamin L

    On hacking: I’ve been thinking about an economic analogy here which may prove a useful source of intuition for biologists. In economics, the way agents hack each other is with money. You give someone money, and they may suddenly start doing something totally strange and contrary to their previous nature, like sitting in an office for eight hours a day, or venturing into a dangerous mine to gather materials for someone else.

    If money were a chemical, and if we saw humans the way we see cells, we might think of money as having some property that in some way takes over humans or possesses them in some form, or otherwise overrides or interferes with their free will. Scientists might find neural patterns in how humans react to money and conclude that these patterns are the pathways by which money controls people. Experiments would be conducted to find exactly what shade of green is necessary for money to have its manipulative powers. And so on.

    Of course, humans are not taken over by money. Instead, money only works the way it does because humans choose to accept money in exchange for doing a task they otherwise wouldn’t do. I’d guess that at least some biological hacking works on similar principles, where the hack works by offering the subject something it wants that it isn’t normally able to get, in such a way that it starts behaving very differently from how it used to.

    > So, I think that we need to be looking for optimal policies for scaling collective intelligence, but not necessarily copying what biology does because I don’t think biology is tracking all of the values that we should hold sacred.

    Economics has the policies and principles you’re looking for (price system), in particular with respect to tracking all relevant values (externality). Happy to tell you about it in some form, as this topic is super important for human well-being.

    1. Turil Cronburg

      I would say, of course humans are absolutely taken over by money, as money is the viral meme that invades cultures, allowing some authoritarian set of rituals/rules/laws to control the collective, via its individuals. At this point, almost no humans would believe you if you told them that using a competitive point scoring game as a way to relate to one another was a choice. The religions of money, grades, votes, “likes” and so on are so indoctrinated into our culture from early childhood that it’s simply the way life is for most humans, controlling a large part of most every individual’s day as they see all other humans as enemies/competitors at least at some level.

      In biology, we’d call this primitive, where single celled organisms don’t join together to help one another simply by doing what they like to do except for the eating/killing one another. There’s no trade. No payback. No expectation. No sacrifice. Just healthy, free living, with the small evolutionary change of doing it while in a group, instead of alone. It’s no more complicated than inhaling and exhaling, which just happens to use and generate resources that other species generate and use, for a more cyclical ecosystem. I naturally input what you make and you naturally input what I make. Only there is an indefinite number of yous and I’s.

  9. Captador

    Memory ideas do remind me of N. Luhmann’s (and others’) concept of meme survival, and of social systems being an exchange, or even a battle, of happening distinctions through conversations between agents, or just with a future self. “Don’t make such decisions, it was stupid, and individuals of the collective will fight against it again”

    1. Mike Levin

      The messages to a future self thing is very interesting and relevant. I’m not familiar with Luhmann’s work – can you give some references?

  10. Colin S

    Are you familiar with the phenomenon of paradoxical lucidity, specifically in dementia sufferers? Reading the bit about the caterpillar to butterfly, specifically, “So it’s no good to store a memory of leaves to be found on this color disc, because the butterfly can’t use it. It is irrelevant to the butterfly. It doesn’t map onto the new body,” made me think of dementia. Perhaps, loosely speaking, certain personal memories lost in dementia (that can suddenly return near death) are “irrelevant to the” new person? And something makes them relevant again near death?

    1. Mike Levin

      Yes! this is really important. I’m currently doing a review of a number of related phenomena, and working on a primary clinical paper of some specific cases of this type. There’s a lot here, I’m sure.

  11. Sonali Sengupta

    An enthralling read! Thank 🙏🏼 you.
    Cognitive light cones of cells. Wondering how individual cellular cognitive light cones interact in a collective? For example, if a single liver cell has a specific cognitive light cone, then how do the cognitive light cones of different types of liver cells interact, resulting in the cognitive light cone of the collective, i.e. the liver, in this case? Does the collective cognitive light cone correlate with the properties of the specific organ – for example, the liver is a regenerative organ, akin to having a mind of its own, in comparison to the non-regenerative pancreas?

    1. Mike Levin

      Yep that’s a great question. We’re working on this now, with some simulations and cell/tissue explants. I think these things are all present at once and both cooperate and compete.

      1. Sonali Sengupta

        Wow !

  12. Captador

    Autopoietic social systems might be a rabbit hole and a pathway to the mind-altering Laws of Form.

    The works are Social Systems and Organization and Decision:
    https://books.google.fi/books/about/Social_Systems.html?id=zVZQW4gxXk4C&redir_esc=y
    https://books.google.fi/books/about/Organization_and_Decision.html?id=xOlwDwAAQBAJ&redir_esc=y

  13. Heather Chapin

    Fantastic stuff here! Do you think bioelectric signaling is how memory is perpetuated? Not as a physical trace per se, but as a physical mechanism in biological substrates? What are your thoughts on a kind of “memory” outside of biology (outside of bioelectricity)? (For example, Sheldrake’s morphic resonance)

    1. Mike Levin

      I think bioelectricity (whether in body or brain) is an ideal kind of architecture to interpret, and re-map as needed, information held in the reservoir of the cells’ biochemical and biophysical churnings. I’m in theory open to the possibility of other non-biological substrates for habituation and such, so bring on the experiments!

  14. Oleg

    >By the way, that’s a unique human trait – to have goals that are bigger than your lifespan.

    Aren’t the cases when ants or bees sacrifice their lives to save an entire colony from a threat examples of movement toward a goal beyond their individual lifespans?
    It seems to me that this is not a unique trait for humans, but a phenomenon that manifests itself in different species. The higher their sociality, the more clearly we can observe goal-setting that goes beyond the life of one individual.

  15. Mike Levin

    It’s not clear to me that the ant/bee behavior has the entire colony’s welfare as a goal. The colony’s goal, yes, and it’s willing to sacrifice its parts (like we sacrifice pieces of skin and other cells during sports). But an individual ant/bee, I am not sure. The way you recognize goals is that you make interventions and see what the system does. I have a feeling (but don’t know for certain) that if you set up scenarios where the whole colony would be killed if the ant/bee didn’t do something, they still wouldn’t do it. They have some fixed behaviors that do benefit the hive long-term, but their intelligent, goal-driven, flexible behaviors are mostly for short-term, local rewards, not for long-term, colony-size outcomes. But I could be wrong; more experiments needed!

    1. Oleg

      Thank you very much for the reply!
      Yes, I completely agree that the uniqueness of people lies in the flexibility of the temporal and spatial scale of each individual’s goal-setting.

      I’m afraid that even if we can do a hundred experiments where one ant can decide the fate of the entire colony, we simply will not be able to explain to the ant the cause-and-effect relationship between its choice and the fate of the colony, because the result of its actions will go beyond its very small scope of planning.

      Suppose we give a hungry ant a poisonous substance that seems tasty to him, but when he brings it to the anthill, he will poison half of the colony. Most likely he will bring it again and again. Because an ant has a simple short-term goal – to bring tasty food to the anthill – and an ant, unfortunately, will not be able to set himself a new longer-term goal (if such a goal has not already been formed in the process of evolutionary selection). On the scale at which people are accustomed to work and study things, such behavior seems rigidly fixed. But does this mean that other species cannot have goals (already established by evolution) that go beyond their lifespan? I don’t think so; they just can’t set such goals for themselves in the course of their lives.

      (On the other hand, if we repeat this experiment for many thousands of years, we will probably someday get a colony of ants that ignore this tasty but poisonous substance, and thus we will come to the result that the ants have learned in a particular case to put the more distant goal of the colony higher than their individual immediate goal… but this is a different scale of learning and a completely different story :D)

  16. […] during heart-lung transplants in human patients (more on that in a subsequent post, meanwhile here are some thoughts on memory). Here are the ones I know about (plus the superb work of David […]

  17. Helen Asetofchara

    >>think about people with brain damage that cannot form new memories

    Then you should also think about, e.g., winners of memory contests showcasing feats of spectacular memory; otherwise you’d be biased toward a narrow sample of the range. It seems rather obvious that if memory championship contestants used in their frameworks the statements about the unreliability of memory, they’d hardly win. And the mnemonic techniques they describe generally do not contain such statements.

    Brain-damaged patients are conveniently considered some type of gold standard of evidence in neuroscience etc., while generally they do not contribute to the development of human civilization.

  18. Mike Levin

    This is a very good point. I need to read up on what the data here actually are.
