Videos of, and transcripts of Q&A after, a few of my recent talks

Here are a few talks, and most importantly, the transcripts of the Q&A sessions afterward. There are often some great questions asked at these talks, and I trim them from the lecture videos because not everyone wants to appear online in videos. But I’ve transcribed the text and anonymized it by re-wording the questions and making them more generic, so that my replies can be seen below. Scroll down past the video links to see the Q&A text.

“Nature of the Self” – a talk for the Buddhist club at Tufts, October 2023

“Bioelectricity, Biobots, and the Future of Biology” – a talk given in November 2023

“Bioelectric Networks Underlie the Collective Intelligence of Morphogenesis”, a talk given to a neuroscience audience in March 2024:

And a transcript of the Q&A following that talk:

Q: I still have this question that we discussed before, about what kind of phenomena you might usefully apply the term “mind” to. For me, at a bare-bones level, a mind is something that allows you to think and to feel, and I think that goes a little beyond flexible or complex problem-solving. To give you another example – you had some very impressive ones, like the cellular processes that do things more flexible than joining two magnets together, generating some sort of circular arrangement with either multiple cells or single cells and so on – consider the immune system. It obviously has some sort of collective intelligence, if you wish, in that multiple units work together to achieve something; it can learn, in the sense that it retains information about past infections; and it can solve the same problem in multiple different ways, making antibodies to different antigens of the same pathogen. But none of what it does, intelligent though it may seem, enters my mind. It might be that it has a mind of its own, but I’m not aware of it. I’m not aware of what it’s doing, other than that I feel miserable when it’s failing and better when it’s succeeding; I have no conscious awareness of what the system does. So you can, in a sense, metaphorically call this intelligence, but to me it’s something different. In the same way, if we scale up, collective intelligences are also, for me, more metaphorical than real intelligence. If ants work together to, let’s say, weave together some leaves, there are still as many intelligent entities in that structure as there are individual ants. They all do their own thinking or feeling; nothing emerges at the meta level that the swarm as a whole thinks or feels – no more than in a stadium wave, where tens of thousands of humans all move together. To an alien observer that might look like they’ve become a united individual, but there are still as many entities thinking, feeling, and enjoying the victory of their team as there are people in the stadium.

A: Great; there are at least four different points here. The first thing is that obviously there’s a reason why big brains, or brains at all, exist. We have no evidence for long-term planning or language – there are many things we have no evidence for in these simple systems. And, for tractability, I’m limiting “intelligence” here to problem-solving – not play or other dimensions. But, overall, I believe that it’s a continuum. I don’t believe in a sharp distinction where you can say, okay, this is a mind and this is not. I much prefer the question of how much and what kind. Specifically, the way that gets cashed out is: how many of the tools that you normally use to study minds can be appropriated to study a given system?

The second is your point about it being metaphorical versus real. This is a basic philosophical point, but I don’t believe in that distinction either. I think all we have in science are metaphors; everything is a metaphor. There are no molecular pathways – the “pathway” is a metaphor, for example. The only issue is how useful any given metaphor is. We don’t have access to reality; we tell stories (metaphors) that help us organize our experience.

Your third point was about an observer seeing the Mexican wave and deciding that it might be an agent. First, you cannot decide any of this by pure observation – you need to do perturbative experiments. And if you were to find some kind of wave process in an excitable medium that can solve different kinds of problems and do new things, then yes, you could assign some level and type of cognition to it. But you have to do the experiments and find out what it can do. Now, a creature with an extremely long time scale would see all of us as temporary metabolic patterns within an excitable medium. And if it didn’t know how to interact with us in the optimal way, it would also see us as transient blips of interesting and complex patterns, but not cognitive. So all of this is a question for an observer: can the observer do the right experiments to decide whether the interaction modes we use with other cognitive beings are appropriate here?

Your last point was that you don’t feel your immune system being conscious and feeling its way through the world. That’s true, but you also don’t feel my consciousness as I go through the world, right? And you don’t feel your right hemisphere’s experience and all the opinions it has that differ from “yours”. We know from split-brain and other kinds of patients that there’s all kinds of stuff going on there that we don’t really have any access to. I would say that that’s completely expected, and our bodies are home to a wide range of diverse cognitive systems. It’s nice that our left hemispheres are having this great verbal discussion right now, but their smug feelings about being the only inner perspectives around are unwarranted.

And I think because of the conservation of both bioelectric mechanisms and algorithms, for exactly the same reasons that we think our brains are home to consciousness, we should consider that some of our organs may have some of that too. And I don’t think it’s surprising at all that we don’t feel them. We don’t feel that with each other either – we have to infer other minds from experiments and interaction. And that’s exactly what I propose to do with all of these other unconventional systems.

Q: When I was listening to the talk, I was wondering about your primary interest: is it the utility of these different kinds of organisms, whether really small or bigger – what can these systems be applied to? Or are you also interested in claiming that they have some level of intelligence, in saying exactly what kind and level of intelligence these systems have?

A: It’s the latter. I want to say very specific things. To me, all of these claims about intelligence and minds and so on are fundamentally interaction-protocol claims. What you’re really saying is: yes, you can train it; or no, it doesn’t remember past the next five minutes; or yes, it can do certain specific things. I’ve stayed away from issues of first-person consciousness and so on, not because they’re not important, but because it’s hard enough to get people to think about these functional kinds of things in non-brainy agents, so I often stay within third-person observable, really tight, experimentally tractable approaches. To the extent that we understand the types of intelligence that something can deploy, we can better make use of them in bioengineering and regenerative medicine. If I didn’t have these practical kinds of examples, then people would say the same thing they sometimes say to panpsychists – what’s the use of this way of thinking about it? And I think it’s a very good question. Any of these philosophical views has to help you with discovery and new capabilities, has to open new research programs – they have to be practically useful. I can answer that question, and give examples where, because we applied tools from cognitive science, we were able to do A, B, and C in regeneration, birth defects, and cancer which hadn’t been done before.

Q: Is there even a way to measure that? Because, going back, if you do talk about things like consciousness, I would somewhat agree with you that there are probably all different kinds of levels, and not a singular level of consciousness, for example. But what does seem like an issue to me is that if you don’t have a big system that can integrate information very well, then it seems very unlikely to me that its intelligence goes above anything that is directly programmed or instructive.

A: Yes, that’s a very good point. There is a movement in neuroscience which seeks to directly measure integrated information. Tononi et al. would say that the way you know emergent selfhood and intelligence is by looking at an – at least in principle – measurable quantity: integrated information. We do that in the lab. We study integrated information in the electrical activity of these cells, exactly as Tononi does in patients that are in comas, asleep, awake, etc. Talking about the consciousness of cells and tissues is very difficult, for the same reason that it’s difficult to talk about it in fish and insects and so on. But you can use some of the same tools. Anesthesia, hallucinogens, optogenetics, etc. – all the tools carry over and do interesting things to behavior in anatomical space (morphogenesis). Guess what happens when you anesthetize an anatomical intelligence? You end up with a pile of cells that are perfectly happy individual cells, but the collective is gone. They no longer pursue the appropriate goals in morphospace because they’re disconnected. And eventually that leads to a permanent dissociative break – cancer.
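To make “measuring integration” concrete, here is a minimal toy sketch in Python (an editor’s illustration – not Tononi’s full Φ formalism, and not the lab’s actual pipeline): a crude multi-information proxy on an invented two-cell system, showing how integration collapses when the coupling between the parts is blocked, as with anesthetics.

```python
# Toy proxy for "integration": whole-system entropy vs. the sum of the
# parts' entropies (multi-information). All numbers are invented.
import numpy as np

def entropy(samples):
    """Shannon entropy (bits) of the rows of a 2-D array of discrete samples."""
    _, counts = np.unique(samples, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def multi_information(x, y):
    """H(X) + H(Y) - H(X,Y): positive when the parts share information."""
    return entropy(x[:, None]) + entropy(y[:, None]) - entropy(np.stack([x, y], axis=1))

rng = np.random.default_rng(0)
n = 10_000

# "Coupled" cells: cell y usually copies cell x (gap-junction-like coupling).
x = rng.integers(0, 2, n)
y = np.where(rng.random(n) < 0.9, x, 1 - x)
print(f"coupled tissue:  {multi_information(x, y):.3f} bits")      # ~0.53

# "Anesthetized" cells: coupling blocked, states become independent.
y_iso = rng.integers(0, 2, n)
print(f"decoupled cells: {multi_information(x, y_iso):.3f} bits")  # ~0.00
```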

Q: How do you diagnose a hallucination in this case?

A: The same way you can do it with animals – errors of perception. You know what the system normally does when presented with certain stimuli, and what you can see is that it misperceives those stimuli and ends up doing things in anatomical space that are quite different, such as building organs that belong to the wrong species, or to no known species at all. You have to make specific hypotheses and test them. For example, we’ve made a worm whose anatomical intelligence has a perceptual bistability, like a Necker cube, with respect to its memory of what a correct worm is. And they do exactly what you would expect – every time you cut them, they randomly flip between one-headed and two-headed; they’re permanently destabilized because they can’t pick one pattern in morphospace. And by the way, ant colonies also fall for visual illusions – not the individual ants, the colonies.

And that gets back to your point before about how many individuals are there in an ant colony. I actually think it’s not the same as the number of ants. I think it’s at least the number of ants plus one, because I do think that the colony has goal-directed and problem-solving behaviors that the individual ants do not have.

Q: A previous question seemed to imply that intelligence was somehow related to mind – that to have intelligence, there was a prerequisite to have a mind, to think, to feel. I was wondering what you thought about the distinction between intelligence, a mind, and also agency, and how those three descriptions relate to each other, philosophically or in biological organisms?

A: For the purposes of most discussions, I try to stick to a kind of engineering perspective. “Intelligence” I usually limit to problem-solving: I don’t study play or creative exploration – although maybe we kind of do – but I don’t focus on that. So I mostly focus on problem-solving, which admittedly is only a part of what we understand by intelligence. And again, that’s fine, because I think it is a continuum, and I don’t expect to find in basal forms of agential material everything that I find in advanced human metacognition and all the things that humans do. “Agency” is really important. I use that all the time because it asks the following question: how much of the toolkit that we have for understanding decision-making – cybernetics – can we deploy? How much of that toolkit is applicable?

I will give the following simple example. If you’ve got a bowling ball on top of a bumpy landscape, it’s going to roll wherever it rolls; the degree to which your perspective of the landscape – as the third-person observer – tells the whole story is pretty much 100%. Your view of the landscape tells you everything you need to know about where the ball is going to end up, or how to make it go somewhere. But if you’ve got a mouse on the landscape, your view of the landscape isn’t nearly as important as the mouse’s view of the landscape. There’s an internal representation – maybe it got rewarded or punished at certain positions, or whatever – so your view isn’t that actionable or predictive. And you can generalize that quite a bit, to handle robotic vacuum cleaners and cells and all kinds of stuff, and ask how much agency it warrants: how much do I need to know about its internal view of the outside world? What has been its history? Does it matter? Does it have any memory going back? Does it make an internal model of its environment? Does it do active inference? Does it do any kind of forethought?
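Here is a toy sketch of that contrast (an editor’s illustration; the landscape, the mouse’s value map, and all numbers are invented): the ball’s next move is fully predicted by the observer’s map of the terrain, while the mouse’s next move also depends on hidden internal state – its reward history.

```python
import numpy as np

heights = np.array([1.0, 0.5, 1.5, 2.0, 2.5])   # a 1-D "landscape"

def ball_step(pos):
    """Ball: roll to the lower neighbor. The observer's map of `heights`
    is the whole story - no internal state is needed to predict it."""
    neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(heights)]
    best = min(neighbors, key=lambda p: heights[p])
    return best if heights[best] < heights[pos] else pos

def mouse_step(pos, value, eps=0.1, rng=None):
    """Mouse: move toward the neighbor its *learned* value map prefers.
    `value` is hidden internal state; the terrain map alone can't predict it."""
    rng = rng or np.random.default_rng()
    neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(heights)]
    if rng.random() < eps:                  # occasional exploration
        return int(rng.choice(neighbors))
    return max(neighbors, key=lambda p: value[p])

# Suppose the mouse was once fed at position 4 (the hilltop): its value map
# now points uphill, exactly where a landscape-only model predicts downhill.
value = np.array([0.0, 0.0, 0.0, 3.0, 5.0])
print("ball  from 2 ->", ball_step(2))                    # downhill: 1
print("mouse from 2 ->", mouse_step(2, value, eps=0.0))   # uphill:   3
```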

So it’s about using all those concepts, making very specific hypotheses, and then doing the experiment. If you think it has a goal, you can put barriers in its way and find out what it does. Agency, I think, is really important because there’s a whole host of tools you can use to ask how important the system’s perspective of the landscape is. For cells and tissues, it is very important. You cannot treat them as a bowling ball. People have, and it’s led to many limitations in biomedicine and engineering. We are now finally starting to see that we really do need to take into account: what are they stressed by? What do they perceive as errors? What are they expecting? Cells do have expectations. Even algae can be surprised.

Q: Would you like to explain a bit more, Michael, what you had in mind with the ant colony? I think there’s no question that social insects can do incredibly complex problem-solving as a collective, but in my view these are largely hardwired behavioral modes. They might involve elements of learning, but I don’t yet see how one could meaningfully say that the collective thinks or feels something. You might have a different view, though, so let’s hear what your idea was.

A: Ok. I’m not going to make any claims one way or the other, because we don’t yet have the data. But here’s our very simple-minded kind of experiment. Imagine an ant colony, and I pick two locations. Location A is just a little platform with a camera that’s watching how many ants are standing there. Location B is a place where a machine drops little droplets of food – sugar water or something – depending on how many ants are standing over at location A. The idea here is that no individual ant has the experience “I go stand in this place and I get some food”. It is possible that what the collective would realize is: hey, if I send a bunch of my parts to go stand over there, I can pick up some food over here. Basically what a rat is doing when it realizes: I can do something with my foot, and my gut will get the reward. Now, if that were to work, I think you would have an associative task of some sort. And you couldn’t pin it on any of the individual ants – the owner of the associative memory would be the colony.
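A toy simulation of that setup, as an editor’s sketch (the colony size, dispenser rule, and learning dynamics are all invented): the point it encodes is that the variable being tuned – the share of workers posted at A – belongs to the colony, not to any single ant.

```python
import numpy as np

rng = np.random.default_rng(1)

def payoff(fraction_at_A):
    """Food dispensed at B given the share of a 100-ant colony posted at A,
    minus the foraging those posted ants give up (all numbers invented)."""
    ants_at_A = rng.binomial(100, fraction_at_A)
    food_at_B = 30 * (1 - np.exp(-ants_at_A / 15))   # dispenser saturates
    foraging_lost = 0.5 * ants_at_A                  # opportunity cost
    return food_at_B - foraging_lost

# Colony-level trial and error: no individual ant ever experiences
# "stand at A, get fed at B" - only the collective allocation is reinforced.
fraction = 0.02
for trial in range(2000):
    candidate = float(np.clip(fraction + rng.normal(0, 0.02), 0, 1))
    if payoff(candidate) > payoff(fraction):   # noisy hill-climbing
        fraction = candidate

print(f"learned share of workers posted at A: ~{fraction:.2f}")  # roughly 0.2
```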

The final thing is that the skepticism which is rightfully applied to any kind of collective being could just as easily be applied to ourselves – also a collection of neurons. We have no way of knowing what any other collection of neurons feels like, except our own – and not even our own, because we don’t know what our right hemisphere is feeling, and so on. All of this is only inferred from behavioral experiments and observations and things like that.

Q: Thank you very much – it was a fantastic talk. Just to follow up on your very last example, which I found interesting: I know nothing about insects, but could it be possible that the experience of one ant receiving a reward is somehow conveyed or reflected to another ant in the same colony? Thinking of humans, it would be like the mirror neurons that allow us to empathize with others. Could there be an analogous mechanism in social insects?

A: Thanks; yes, there would have to be some way for information to propagate. If there was truly no way of getting information from ant to ant, then none of this would work, for sure. For this experiment to work, there absolutely has to be some way for information to spread. But I don’t think that negates the point – all that means is that you found a mechanism that holds together the collective intelligence of the colony. I mean, look: what holds together our collective intelligence? It’s the ability of neurons to talk to each other. And when we are hit with a general anesthetic, that is the pathway that’s targeted. The cells are perfectly happy; all the neurons are sitting there. But what you’re missing is the emergent self, because that information is not propagating. And, as was said before, the integrated information isn’t there. So I think you’re absolutely right – it’s the cognitive glue.

Technological Approach to Mind Everywhere: a talk to a cognitive science/philosophy of mind group focused on representation:

and the Q&A following this talk:

Q: I have a general question about how far down you want to push the description of agency to the components of a system. You said there, and you’ve written it as well, that we’re all made of subunits that used to be independent agents, and you show a single-celled organism that is an independent agent. But I would say we’re not, right? My muscle cells and skin cells were never independent, free-living things; those actual cells never were like that. In evolutionary history, multicellular things evolved from unicellular organisms that were free-living agents. I’m fine with the idea that cells have some autonomy, that they don’t have to be micromanaged – I think that’s a really important point, well made in your paper. But it feels to me like what makes multicellularity possible is precisely the fact that individual cells cede their agency, or a lot of it, to the collective. I wonder where you stand on that.

A: Yeah, that’s a super important point. It is certainly possible that in making the transition to multicellularity, cells have permanently ceded their individuality. If that’s true, then there will be no practical utility in using some of the tools and concepts of agency on those cells, right? So that’s definitely an empirically possible outcome. I don’t think that’s right, but it could be. Here’s why I don’t think it’s right. Of course there are many adaptations where, much like with parasites, you give up a lot of autonomy once you get comfortable, in an evolutionary sense, with being provided for. But fundamentally, have they lost their independent nature? I think the key parameter here is the cognitive light cone, which is defined operationally as the size of the goals you’re capable of pursuing. What I think happens in the embryo is that you have cells that have ceded their autonomy because altogether they’re working on a larger goal. They’re not working on tiny goals like a bacterium – just local metabolic state and things like that. They’re working on a grand construction project they’re willing to die for. That’s all true. But I think that scale-up of the cognitive light cone is not permanent and fixed, because I think what happens in cancer – and we’ve targeted this as an anti-cancer treatment – is that the mechanism by which the cognitive light cone expands can break down. It’s a failure mode in which the individual cells literally revert to unicellular behaviors. People have shown they start to express microbial transcriptional programs – genes that go a billion years back – and they do very clever adaptive things to fight back against their environment (the body and its attempts to exert control). I think that tradeoff is not permanent. Multicellularity cranks it up, but it can go back and forth.

Q: I agree with that. When I say they cede that autonomy and agency, I don’t mean they’ve lost it. I mean, it’s suppressed. 

A: Agreed, and that leads to issues. For example, we’re doing work on stress. The idea is that under certain stresses, the cells have enough autonomy to say: this thing’s not working for me, I’m out. And they go down the cancer route, because they had just enough autonomy to have that we–me distinction.

Q: Hi Mike, thanks for this. I was intrigued by your anatomical compiler notion, and I think you said you want to impose goals onto a collective intelligence in morphospace. So my question is about the notion of imposing goals and its relationship with agency. This is a question in AI as well, right? People talk about imposing goals onto a superintelligent system, and there’s this debate about how much you can impose onto things that already have their own agency. So I’m wondering if that question makes sense – how much is it possible to impose goals on systems that have their own goals? And is there something like the orthogonality thesis, that any goal is compatible with any level of intelligence? What are your thoughts on all that?

A: Yes, very interesting point. The first twist that I put on this concerns a gradient, or spectrum, of imposing goals. We can impose with a screwdriver, with rewards and punishments, with cogent reasons. All of this is goal imposition. As someone said, the act of writing, or speaking persuasively, is a violent act. What he was getting at is that to the extent that you’ve produced a cogent piece of writing, you are forcing certain ideas onto your recipient. They don’t have a choice: once they see your logic, they are forced, constrained, and determined by their own commitment to logic and certain axioms they believe. Any kind of communication has an element of you modifying your recipient, your listener. So I think it is true that goals and level of sophistication are somewhat orthogonal, although we do have work with Buddhist scholars who think there is some guaranteed alignment in certain ways.

But the goal here is to find the most effective way to make your goals be the cellular collective’s goals. The way it plays out in biomedicine is that the vast majority of treatments don’t actually fix anything. Antibiotics do, because they target an invasive, low-agency invader. But we often address symptoms, not causes, and the symptoms return when the drugs are stopped. What I would like is to go one step deeper and push the cells to the point where the desired patterning or physiological state is their new set point. I don’t think we’ll get there by giving them cogent reasons, but I think we can do better than micromanaging it with chemistry. So, there’s a spectrum, and it requires you to guess correctly where on the spectrum your system is, because if you undershoot or overshoot, it will be a mess. Systems very low on that scale can only deal with goals like following energy gradients. Systems more to the right have richer ways of pursuing goals, requiring less micromanagement and more communication versus rewiring. Yes, I see one spectrum for it from beginning to end. And, on the right side of the spectrum, you should do less imposing and more “collaborating” – high-agency systems are better related to with more bi-directional relationships – friendship, love, openness to mutual change. At the left side, it’s mostly prediction and exerting control.

Q: Can I ask one more question? At what point is there a distinction between the system having its own goals versus carrying out the goals of some external agent? Is this a continuum?

A: I see it as a continuum because even in the ultimate example of a system having its own goals, which is us, how many of my goals are really my goals? How many were my parents’ goals, social goals, my wife’s goals? I don’t know. I think it is a continuum. We can’t put a sharp line on it.

Q: Do you think the word goal is used in different ways, like for an AI system winning a game versus a person winning a game?

A: They are different in that if you wanted to modify the goal, you would have to use radically different tools. I think they’re both goals, but you would have to use very different approaches to change them. And I think a thermostat has real goals – tiny goals, with no second-order metacognition; you’re not going to convince it of anything. But what a thermostat is doing is an atom of goal-directed behavior.

Q: Thanks. I was going to ask you to link what I see as two different themes in your research. The first is that you’ve talked in places about these bioelectric signals as having semantics – a bioelectric semantics – and in other places it’s described as a bioelectric code. Is the best way to understand what those are about the word “semantic”? Are those high-dimensional state spaces across different levels of organization? And the second part is your discussion of perspectives or observers, and your recent work on polycomputing. I’m trying to link the perspective-dependent interpretation of information with your usage of the word “semantic” when describing these bioelectric signals.

A: When I say semantics, I don’t mean there’s underlying language processing the way the brain does it. I’m not making that claim; there’s no evidence for grammar yet. But I do think there’s an important aspect of encoding and interpretation here. When we impose a bioelectrical pattern associated with eye formation and the cells build an eye, there is nothing about that voltage that is specific to eyes. It’s an arbitrary symbolic code. In fact, it’s a simple trigger, a prompt, where we don’t have to say how to build an eye, where the different stem cells go, and so on. We rely on the system to interpret our signal in a particular way. And there’s another interesting thing about encodings like this: it’s the pattern that matters, not what gene or ion channel got you to that voltage. You could do the same thing with chloride, potassium, sodium, or protons – it does not matter; the cells react the same way to a pattern, no matter what gene underlies it. And that pattern is spatial, across cells – not a single-cell code. So I think it has the features you want from a code: some independence from the molecular details, and interpretation by the surrounding machinery. We’re working on cracking the bioelectric code, and partly that requires asking what the recipient agent is paying attention to, and how much predictive processing it is doing to determine how to interpret the signals it gets (i.e., how much the outcome depends on its prior beliefs, driven by prior experiences and baked-in setpoints).

Q: But is it going to be understood in the context of growth and development, rather than in representational terms?

A: Well, I think it is representational in the sense that what it represents is a particular region of anatomical morphospace. If I show you that bioelectrical pattern, what does it represent? This region of morphospace corresponding to having exactly two eyes of this shape, etc. When you ask what these electrical networks are thinking about all day, this is what they’re thinking about – morphospace, the various possibilities.

Q: I’m going to build on the previous question. I love how you bring so many different aspects of biology together with a clear focus on agency, motivational drive, and collective versus individualistic behavior at multiple levels – taking broad concepts from philosophy of science to perspectives in neuroscience, while being aware of exotic neuroscience studies. I like every part of it, but it still feels like there’s something implicit not being teased out in what you’re looking for. The word “code” doesn’t appear as a noun in that piece; you use “encoding” and “representing” a lot, but it’s not clear when you’re using them in the soft or the hard sense. But then, in answering the previous question, you explicitly referred to finding some bioelectric code. I’m not convinced you can draw a line between the bioelectrics of development and of neuroscience at a biological level, though I accept your examples. But assuming the information is embedded in bioelectric fields in both cases, what would that code look like, and how would it be read?

A: We have a separate paper called “The Bioelectric Code” that is all about the code. We are at the very beginning of this effort, so I’m not claiming we’ve cracked it – we only know a few pieces. But the most general version is: you have a set of cells with some arbitrary bioelectrical pattern, and the question is, what anatomical structure does that imply? What is the decoding? I view this code as a map between bioelectrical state space and anatomical space, much as physiology in the brain maps to behavioral competencies in some complex way that isn’t worked out yet. That’s what we’re trying to do. We have a decoding for a small number of cases – the frog face, the number of planarian heads. We don’t have one for the shape of the planarian head yet, though we know it’s there. That’s what I want from this anatomical compiler: you’ll sit down and say “I want a frog with six legs and wings and a propeller on top”, and it should give the bioelectrical code messages to make the cells do that. Some people think that’s not possible – that there are developmental constraints such that cells can only build certain things. I suspect biology is very highly reprogrammable, and if we understood the code, they would build basically whatever you want. The way the encoding works is that a collection of cells reads the bioelectrical pattern of itself – not each cell individually; every cell is influenced by every other cell’s voltage. Certain patterns lead to specific gene expression patterns, cell behaviors, and ultimately morphologies. We’ve traced that at a single-cell level, but you really need the collective dynamics. We can say things like “this makes a sharp border, this makes something round, large, small” – we’re getting there; the code is taking shape.
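To make the “decoding” framing concrete, here is a minimal sketch (an editor’s illustration: every voltage value and anatomical label is invented, and the real code is certainly not a lookup table). The code is treated as a map from a collective bioelectric pattern to a target region of morphospace, and only the pattern – not which ion channel produced it – is read out.

```python
import numpy as np

# Hypothetical "codebook": coarse resting-potential patterns (mV) across a
# patch of cells, paired with the anatomical outcome the pattern specifies.
codebook = {
    "one head":  np.array([-50.0, -50.0, -20.0, -20.0]),
    "two heads": np.array([-20.0, -50.0, -50.0, -20.0]),
    "eye":       np.array([-60.0, -60.0, -60.0, -60.0]),
}

def decode(pattern):
    """Map an observed collective pattern to the nearest anatomical target."""
    return min(codebook, key=lambda k: float(np.linalg.norm(codebook[k] - pattern)))

# The decoder sees only the pattern: two physiologically different routes to
# the same voltages (say, chloride vs. potassium channels) decode identically.
observed = np.array([-22.0, -48.0, -52.0, -19.0])
print(decode(observed))   # -> "two heads"
```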

Q: Okay, thank you. My only philosophical question following on that would be, is it rational to approach that as a code as opposed to a biological shadow of how you arrived at the reasoning? But I have to go.

A: Cool – we need another hour on that one.


Transcripts made using AI transcription software, by Nick Sheuko.

9 responses to “Videos of, and transcripts of Q&A after, a few of my recent talks”

  1. wayne Lewis

    In the first reply, you say “all we have are metaphors, we have no direct access to reality”. I agree strongly on the first half. On the second, I’d make the case that our expectations ARE largely direct experience of reality, but because both our communications and conscious thought are dominated by language, we can only process information actively by metaphor.

    1. Mike Levin

      > direct experience of reality

      this concept is really on the ropes nowadays. Now that we know not only how little of reality our senses actually perceive, but also how much our retina, brain, etc. fill in (confabulate on the fly) and even back-date in our stream of consciousness, it’s very clear that we do not have any direct experience of reality (whatever that might be). There are many examples which I don’t have time to go into here, but what we experience is not just filtered, it’s largely created. More broadly, we, like all finite cognitive beings, have to pick a perspective and commit to ways of coarse-graining the data from that perspective that tell an adaptive story.

  2. Benjamin L

    Very cool. A few questions/comments.

    Regarding hardwired behavioral modes: In the two areas where I’ve read about hardwired/preprogrammed behaviors – emotion and motor behavior – those claims have turned out to be majorly exaggerated, if not outright untrue. As a result, I’m skeptical of these claims in other areas. Has that been your experience as well?

    I think that some of the claims you make are definitely verified empirically in economics, if we’re willing to grant for the sake of argument that the economy is a form of diverse cognition with the price system as analogous to bioelectricity. For example, regarding the question of individual cells ceding their agency to the collective, when humans become part of the economic collective as governed by the price system, they do so without losing their individuality or free will, as we know by personal experience. And when we are separated from the price system (say, a camping trip), we know that there is no sudden dramatic phase shift where we regain our individuality. Similarly, the claim that the pattern matters, not the particular gene or ion channel, is also verified, or at least significantly bolstered, by economics.

    > So, there’s a spectrum, and it requires you to guess correctly where on the spectrum your system is, because if you undershoot or overshoot, it will be a mess. Systems very low on that scale can only deal with goals like following energy gradients. Systems more to the right have richer ways of pursuing goals, requiring less micromanagement and more communication versus rewiring. Yes, I see one spectrum for it from beginning to end. And, on the right side of the spectrum, you should do less imposing and more “collaborating” – high-agency systems are better related to with more bi-directional relationships – friendship, love, openness to mutual change. At the left side, it’s mostly prediction and exerting control.

    One interesting thing about the price system is that while it scales up goals hugely—I think there’s a very sensible argument that it, or the economy that it coordinates, is the biggest cognitive light cone on Earth—it isn’t very sophisticated in terms of persuasion. You persuade it with money (numbers): by paying it to do one thing and not another, it changes its behavior.

    So the goal is very simplistic in this sense, and yet the resulting system is also highly collaborative, requiring very little micromanagement, controllable by top-down arrangement of prices, trusting the individual people and businesses to build the assigned pattern because it makes sense for their own local situations to do so.

    As a result, it seems to me that a highly collaborative and capable system can also be very unsophisticated in its goals. The economy is huge, capable, and collaborative, but you talk to it via “buy low, sell high”, not complex reasons. Have you thought about a two-dimensional spectrum of cognition with persuadability on one axis and light cone size on the other axis?

    1. Mike Levin

      Well, there are certainly many examples of behavior that is hardwired – birds and insects that hatch from an egg being able to carry out complex tasks without having learned to do them, etc. I think your point about the simple way to control a huge cognitive light cone is right on – one thing intelligent architectures do is enable very simple stimuli to have huge effects. A simple voltage value can cause cells to build an entire eye. They allow triggers, with the competency in the recipient, not in the signal being sent. That seems consistent with what you’re saying. It’s true that the size of the cognitive light cone doesn’t directly tell you which triggers or other strategies will be possible, which means it’s likely valuable to do a 2-D quad chart of persuadability vs. cone size. The problem is that for most of these systems we don’t yet know the persuadability, because no one has tried (either because it’s too hard, or because no one thought of it).

      1. Benjamin L

        Comparable observations were once used to incorrectly infer that the development of walking in infants is hardwired, so I’m skeptical of these kinds of claims. I’m interested in a view of things where nothing is hardwired or preprogrammed in a naive sense – instead, perhaps everything is very, very good at optimizing, at least within some range of interactions, and optimization can happen very quickly, with surprising demonstrations of competence.

        The fact that both bodies and economies are made of highly (and surprisingly) competent cells or people respectively united by a shared signaling system that permits the rapid transfer of accurate information suggests that this is a general recipe for creating complex systems of organization with large-scale goals. Perhaps this means that the challenge of creating such systems can be understood in terms of two problems:

        1. Identify units with appropriate competency. (Bearing in mind that many competencies of cells/humans were not visible until after they were connected through bioelectricity/prices.)

        2. Identify an appropriate signaling system for coordinating the chosen units. (There’s probably a reason cells don’t have a stock market.)

        I don’t know how you’d figure out 1 and 2 without simply experimenting a lot. Perhaps as AI advances, we’ll start discovering all kinds of ways to make complex, highly coordinated systems of organization.

  3. Daniel Vilceanu

    Mike, I am a big fan of your work, and I like your vision for the future of medicine. I am a pain physician. Chronic pain is an interesting condition, with interactions between different agents (parts of the musculoskeletal system, nervous system, immune system). Currently, the therapies for chronic pain can be classified as top-down approaches (e.g., pain psychology) or bottom-up (e.g., injections, physical therapy), and every provider thinks their approach is the best. In my opinion, all agents are involved in chronic pain, and it is fascinating to look at the interactions between them – there are all kinds of feedback loops. We should chat if you ever become interested in chronic pain.

    1. Mike Levin

      Thank you. I do have some connection to others working on chronic pain; while I’m definitely interested, I don’t currently think I have any new insights that others haven’t already had. If you have any thoughts on how my work could be relevant, please post them here and we can think about them.

      1. Daniel Vilceanu

        In my opinion the best way to think about the pain system is to consider it a danger signaling system. In chronic pain somehow this system stays active. Many people in the pain field consider chronic pain as a persistent activation of the nervous system. But is the memory of the original injury just in the brain or is it in the periphery also? In planaria it seems that there is definitely some memory stored in the body.

        Another thing totally ignored in chronic pain is the set of protective mechanisms that the brain can use to protect the areas in danger: activation of the immune system, the autonomic nervous system, the endocrine system, muscle tone. I think there are all kinds of agents communicating with each other, and they use their own languages: chemical/electrical signals and who knows what else.

        An interesting pain condition is complex regional pain syndrome (CRPS). You should read this paper: Moseley LG et al., Pain. 2013 Nov;154(11):2463–2468. People with CRPS have cold or warm extremities – it is as if the thermostat setting for that limb has been changed. In this paper they took people with cold CRPS in their hands and created the visual illusion that the affected/non-affected limb was on the other side of the body. The actual temperature in the hands changed accordingly: when the cold hand seemed to be on the other side, its temperature increased.

        1. Mike Levin

          thanks, I will have a look!
