An organicist talks about AI (not really about AI at all), and fear

I’m often asked about my views on artificial intelligence (AI); recently I released my first public thoughts on it, in the bigger context of diverse intelligence; the short (more general-purpose) versions are here and here, and the full (academic) paper preprint is here. As often happens, it drew some interesting responses. I think these shed important light onto the field and where it needs to go, although below I speak only for myself, not necessarily any of my colleagues.

I consider myself to be firmly in the organicist tradition – emphasizing the causal power of higher levels of organization and cognitive perspectives, over molecular mechanisms, as ways to understand and relate to complex systems. Often my Mind Everywhere views attract critiques from colleagues operating through the molecular biology lens, who believe that it is a dangerous category error to entertain the idea that molecular pathways, cells, and tissues could have true goals, intelligence, and an inner perspective. Even those who are not strict reductionists generally believe that what emerges from non-brainy systems are new levels of complexity or unpredictability, not elements of (even primitive) minds.

My framework seeks to reveal more minds, not fewer, in all their glory – with degrees of inner perspective, valence, freedom. One might think that the organicist camp, or holistic thinkers more broadly, would be happy with attempts to show how minds can emerge in seemingly mindless media, and especially with frameworks that illustrate how the organicist perspective can drive new discoveries and practical capabilities. But things are not so simple. After this piece, my email and Twitter DMs contained even more outrage than my mind-focused papers get from mainstream molecular biologists. People felt that including engineered and synthetic constructs on the spectrum of true cognition with us was a major mistake, though no one offered a principled, convincing model of how to keep “artificial” beings out of the exclusive club we enjoy. I’m not saying it’s impossible to formulate such a model; indeed, one of my closest collaborators has a really good shot at that. But none of the responses to these ideas contained even an attempt at it – people were sure they knew a real being when they saw one, and felt very strongly this line had to be maintained whether there was a way to justify it or not. Below, I use this as an opportunity to emphasize some key points of my view, and speculate wildly (given that I have no training in human psychology) on some possible drivers of this visceral response. Of course, this is not really about the response to my views particularly – I’m just one of a number of people trying to make progress in this field, and this push-back applies to many of us who do not immediately see a way to draw sharp lines before we understand how our own magical cognition is embodied.

To summarize, I think the immediate push-back is driven by fear and insecurity – a subconscious recognition that we do not understand ourselves, and that AI is just the tip of the iceberg of deep questions that, when brought to the forefront, will crack our superficial, comforting stories about why we are real and important. I think that at root is the fear that there is just not enough love to go around – a scarcity mindset with respect to compassion and concern¹. I think this position can be summarized as “only love your own kind”.

In my piece, I situated AI within the broader framework of Diverse Intelligence. I tried to point out that our deep questions are not about today’s software language model architectures, but the much bigger unknowns about how to define and recognize the terms everyone throws about with abandon – minds, understanding, goals, intelligence, moral consideration, etc. It’s pretty obvious that whatever the limitations of today’s biotechnology and artificial life, their functional aspects will increase exponentially and cover all of the things that used to be unique to life (especially by hybridizing and altering naturally-evolved biological material with synthetic components). I pointed out that the space of possible beings (including cells, embryos, chimeras and hybrids of mixed biological and technological provenance, hybrots, cyborgs, alien life, etc.) is vast, and that we don’t have a sure footing for navigating our relationships with systems that cannot be classified according to the stale, brittle categories of “life vs. machine” that sufficed in pre-scientific ages. I was very explicit that I was not making any claims about today’s AI; rather, my point was that I don’t think we can make any claims at all yet, because no one has a good, actionable definition of the secret sauce that many feel they have but that our creations cannot share in any degree.

Most crucially, my piece was not about AI – it was about beings who are not like us, and about the relevant universal problems that were here long before AI was even discussed. Being as clear as I was about this, I take the resistance to not be about AI either. It was a general resistance to the Diverse Intelligence project writ large.

One common theme in the replies was the narrative that this way of thinking was the result of an unbalanced development – a psychological deficiency. Only a tech nerd who knew nothing outside the laboratory and machines could dare speak of a continuum of mind that contains both bona fide humans and such unconventional agents as engineered beings. Anyone entertaining such ideas couldn’t possibly understand the ineffable magic of real human relationships and the strong feelings and emotions that “real” beings have. No one offered a guess as to what the magic ingredient might be, or why the meanderings of the evolutionary process would have a monopoly on creating such. But they used a familiar trick for resisting new ideas: painting their adherents as deficient – “they don’t feel the magic like we do, that’s why they say those crazy things.” This way of holding on to old ideas, in the face of challenges that require thought and convincing argument, is ancient. It is comforting and easy to retreat behind the feeling that you directly perceive the truth which escapes the others because they’re just not as developed as you.

There is the curious phenomenon in which people with a specific issue tend to see it everywhere and paint it onto others. I think that seeing workers in this field as incomplete is, ironically, just a mirror of some people’s inability to visualize what it’s like to be someone who is not like them in every way. It’s a kind of lack of imagination and empathy. I suspect that the outrage (at seeking commonalities between highly diverse intelligent systems) is often driven by an innate feeling of incompleteness – a worry that their own development will not have been complete enough to embrace the future. This causes them to misunderstand the scientific and ethical goals of many of us in the field of Diverse Intelligence. It’s scary to see empirical testing of philosophical commitments, because one might be put in the uncomfortable position of having to give up ideas that one cannot convincingly defend.

For this reason, a key risk of testing philosophical ideas against the real world (i.e., engineering) is that people rush to see it as elevation of tech over humanity. This occurs no matter how much one talks about the meaning crisis, the importance of broadening our capacity for love, and the centrality of compassion – profoundly human issues that are the very opposite of technology-worship. Here’s how I define engineering:

I view engineering in a broader sense of taking actions in physical, social, and other spaces and finding the richest ways to relate to everything from simple machines to persons. The cycle I like is: philosophize, engineer, and then turn that crank again and again as you modify both aspects to work together better and facilitate new discoveries and a more meaningful experience. Moreover, the “engineer” part isn’t just 3rd person engineering of an external system. I’m also talking about 1st person engineering of yourself (change your perspectives/frames, augment, commit to enlarging your cognitive light cone of compassion and care, etc.) – the ultimate expression of freedom is to modify how you respond and act in the future by exerting deliberate, consistent effort to change yourself.

So here I clarify my personal position. The goal of my work is fundamentally ethical and spiritual, not technological. I want us to learn to relieve biomedical suffering so that everyone can focus on their potential and their development – to enlarge their cognitive light cone, which is so hard to do when one is limited by the developmental consequences of some random cosmic ray strike into their cells during embryogenesis, or some accidental injury which leaves them in daily pain. It is also to raise compassion beyond the limits set by our innate firmware that so readily emphasizes in-group and out-group. We can start by learning to recognize unconventional minds in biology, and move on from there. That’s what I’m focused on now, which is why biomedical engineering is such a big part of the discussion – so that people understand how practical and important it is. But of course the bigger implications are about personal and social growth.

The goal of TAME is not just “prediction and control”. That’s what it looks like for the left side of the spectrum of minds, and that’s how it has to be phrased to make it clear to biologists and bioengineers that the talk of basal cognition is not philosophical fluff but an actionable, functional, enabling perspective that moves science and medicine forward. But the same ideas work on the right side of the spectrum, where the emphasis shifts to a rich, bi-directional relationship in which we open ourselves to be vulnerable to the other, benefiting from their agency. What is common to both is a commitment to pragmatism, and to shaping one’s perspective based on how well it’s working out for you and for those with whom you interact – in the laboratory or in the arena of personal, social, and spiritual life. Why is this so hard to see – why do efforts at working out a defensible way of seeing other minds get interpreted as anti-humanist betrayal toward technology?

In the end, I think it boils down to feeling threatened – to buying into the idea of a zero-sum game with respect to intelligence and self-worth: “my intelligence isn’t worth as much if too many others might have it too”. I doubt anyone consciously has this train of thought, but this is what I think underlies those kinds of responses to pieces on Diverse Intelligence. Feeling not only that love is limited and one might not get as much if too many others are also loved, but also feeling that one may simply not have enough compassion to give if too many others are shown to be worthy of it. Don’t worry; you can still be “a real boy” even if many others are too.

I think it would be worthwhile to think about how we could raise kids who did not have this scarcity mindset. What kind of childhood would make us feel that we didn’t have to erect superficial barriers between our magic selves and others who don’t look like us or who have a different origin story? What kind of education could be implemented to convince people that the question of who might have emergent minds is a deep, difficult, empirical question, not one to be settled based on feelings and pre-commitments?

The reductive eliminativists, while wrong and impoverished, are at least egalitarian and fair. The “love only your own kind” wing of the organicist and humanist communities, which talks glibly of “what machines can never be”, is worse, because it paints indefensible lines in the sand that can be used by the public to support terrible ethical implications (as such “they are not like us” views always have, since time immemorial). A self-protective reaction leads people to read about calls to expand the cone of compassion in a rational way, but only hear “machines over people, pushed by tech-bros who don’t understand the beauty of real relationships”. Other, unconventional minds are scary, if you are not sure of your own – its reality, its quality, and its ability to offer value in ways that don’t depend on limiting others. Having to love beings who are not just like you is scary, if you think there’s not enough love to go around. Letting people have freedom of embodiment – radical ability to live in whatever kind of body you want, not the kind chosen for you by random chance – is scary when one’s brittle categories demand that everyone settle into clean, ancient labels. Hybridization of life with technology is scary when you can’t quite shake the childhood belief that current humans are somehow an ideal, crafted, chosen form (including the lower back pain, susceptibility to infections and degenerative brain disease, astigmatism, limited life span and IQ, etc.).

It’s terrifying to consider how people will free themselves, mentally and physically, once we really let go of the pre-scientific notion that any benevolent intelligence planned us to live in the miserable state of embodiment many on Earth face today. Expanding our scientific wisdom and our moral compassion will give everyone the tools to have the embodiment they want. The people of that phase of human development will be hard to control. Is that the scariest part? Or is it the fact that they will challenge all of us to raise our game, to go beyond coasting on our defaults, by showing us what is possible? One can hide all of these fears under facades of protecting real honest-to-goodness humans and their relationships, but I think it’s transparent and it won’t hold.

Everything – not just technology, but also ethics – will change, when we confront the deep questions of what makes us real and important, and who else might be there with us. So my challenge to all of us is this. Paint the future you want to see, dropping the shackles of the past. Transcend scarcity and the focus on redistribution of limited resources, and focus on growing the pot. It’s not for you – it’s for your children, and for future generations.


  1. I want to be clear here that I don’t mean this to apply to everyone. There are of course others in the field, including close colleagues, who are working on complex, nuanced, defensible, and useful views of the difference between possible engineered agents and naturally evolved ones. Those few are not what this is about. ↩︎

Featured image by Midjourney.

32 responses to “An organicist talks about AI (not really about AI at all), and fear”

  1. frank a schmidt

    Hey Mike…This essay is a result of collaboration between myself, openai, and Claude. Turning the tables. AI comments on life.

    https://lfyadda.com/what-is-life-some-novel-ideas-discussed-with-claude-3-5-sonnet/

  2. frank a schmidt

    Also…I created a podcast on the essay using notebookLM.google.com.

    Here is the shared URL.

    https://notebooklm.google.com/notebook/6b4a82e3-6596-433b-8f71-a3827b23f69e/audio

  3. Mirka Misáková

    RE education for love. My analogy is education for thinking, which was done by books/memes. It was partially successful – some people are involved in thinking (but probably not the majority). Edu for love – I do not think it transfers by texts or ideas. You need some class of love beings to influence people. Enough limbic resonance. Let us hope it can be done via scalable digital beings 🙂 I would definitely download one for myself

    1. Dan Logan

      I wonder how much of this resistance you point out comes from the deep reinforcement learning we all went through as children to learn to think of ourselves as being one of those third-person people “out there” even though our first-person experience is nothing like that. As Douglas Harding pointed out, people move through the world but I stay still and the world moves around me. People have heads and backs, but I am, at best, arms, legs, and a front sticking out of no-thing.

  4. Zach C

    I do appreciate you speaking clearly your position.
    I want to emphasize again how valuable the tension of ideas actually is. Because I do not agree with all your positions, and I see the unstated nuance in certain details that you talk about. But. Something about the way you express your ideas transcends my needing to agree with you. Fundamentally I feel more desire to explore assumptions I previously took for granted (maybe even assumptions that were never explicit but are implicit in the meat of what I prioritized before). I feel your colleagues feel the same way when watching the videos of your discussions.

    It’s also ok that you don’t state clearly every opinion you have, maybe even better that you do hold some thoughts back. The ambiguity is doing at least half of the work of inspiring, confabulating, and broadening.

    The one commitment I think science should have, is a commitment not to dogma.

  5. Vlad

    Michael, I have been fascinated by and sympathetic to your point of view on intelligence for a while. And even though your framework is general, I think that AI and technology are major amplifiers of the fear you are describing. Fear of “the other kind” is ancient and ingrained. But for a long time people could have debates about intelligence in their free time with little practical significance. It mostly belonged to the realm of philosophy. Now it has all changed. We suddenly have plenty of documentary evidence that animals are much more intelligent than we thought (Youtube alone could be enough), and so are the plants/bacteria and so on for that matter. AI gets more capable by the day. And all of a sudden, relating to other kinds of intelligence is no longer a philosophical issue – your quality of life is going to depend on it. Perhaps this immediacy drives the old fear more than anything else.

  6. Sage Ealy-Silk

    As a highly unconventionally-minded (read “hella neurodivergent”) human, I want to thank you for affirming what disability rights activists have been saying for years! So many of us were profoundly educationally neglected (myself included) because we were held to be “less than” in our intellectual capacity and thus deemed unworthy of a true education tailored to our actual talents and areas of needed support. I’m in my 50s now and alas the so-called “special” education system (at least in my home town) is still profoundly inadequate all these decades since I did my sojourn through it.

    I long to see a world where everyone accepts and values an unlimited diversity of minds, both human and non-human, as well as hybrids and hybrots, etc..

    I’m a Buddhist and additionally like many ND folks I have my own spiritual beliefs that are ever evolving and which involve unconventional things like the belief that all sentient beings co-create this shared Universe from the time of their birth/inception and this propagates back in time as well as forward. That is: that my current world was and is being co-created not only by my ancestors and all of Nature, but also by people/beings not yet born/created, which for all I know could include future AIs and other non-biological intelligences.

    As one of my Zen teachers has taught me, I hold this belief lightly, but it’s fun to think about.

    Thank you so much for all your work in helping to open people’s minds to the greater possibilities for a more loving and diverse world that we could create together!

  7. M

    Thank you for writing this, I always love everything you put out. I think the scariest thing you talk about is “empirical testing of philosophical commitments” because those commitments are tied to folks’ sense of self, along with social entanglements.

    I really appreciated how you phrased “The goal of my work is fundamentally ethical and spiritual, not technological” and it reminded me of the last sentence of The Extended Mind (2021):

    > Acknowledging the reality of the extended mind might well lead us to embrace the extended heart.

    I hope I can embody both of those phrases 🙂

    1. Mike Levin

      thanks, I had forgotten that in the Extended Mind, that’s superb! Here’s our paper on this concept specifically: https://www.mdpi.com/1099-4300/24/5/710 I should have cited it!

  8. Nathan Sidney

    Thanks for this Mike, personally I find your “philosophy” beautiful and elegant and I think it does exactly what you hope it will, it has expanded my cognitive light cone.

  9. Pamela Lyon

    Have been away from the refreshing river of your thought for a while and am glad I bathed in this remarkable post. So much to chew on. So happy to see you actively endorsing revival of the ‘organicist’ position within the wider view of diverse intelligences. We differ on some things related to technology–which I suspect would entirely boil down to our differential trust in the ability of political institutions to withstand the power now concentrated in very few hands in this space (I being more of a catastrophist). I applaud your public expression of the need to expand the ‘light cone’ (delightful!) of love and compassion to multitudinous others, principles by which I live and work. Your summary paragraph was brilliant, but this is the message that speaks most directly to me, reflecting what I have been taught for decades but which can be so very hard to incorporate on a consistent basis without daily practice: “the ultimate expression of freedom is to modify how you respond and act in the future by exerting deliberate, consistent effort to change yourself.” This effort, or so I am learning, does not involve the creation of a ‘better’, more productive, more accomplishing self—which everything in our culture drives us to be—but recognition that the simple (and mysterious) capacity to be aware, to love, to wish others not suffer, and to act on the knowledge that comes from these to help make things work for as many as possible is enough. We are not alone in possessing these capacities; the club is nontrivially non-exclusive. We can say it’s ‘just biology’ or (in the future) ‘they’re just machines’ but what does that even mean? It is, as you say, a distancing move born out of fear. I found the idea of a ‘scarcity mindset’ very interesting. Thank you again for allowing us to share in your working-through of these ideas in a public space. Keep going, brother. You’re definitely on to something.

  10. Benjamin L

    There’s a long history of people struggling to recognize something as cognition when it doesn’t match the stereotype of cognition. This is true even when the cognition occurs in humans. Feelings, for example, have long been considered antithetical to cognition, but they are a form of cognition: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2396787/. Similar arguments can be made about so-called reflexive motor behavior.

    In the linked paper, Duncan and Barrett argue that affect (feelings, basically) constitutes the core of conscious experience. If you could show that an AI has something analogous to affect, this would then set up an argument that the AI is conscious. For example, I show how affect is analogous to profit in this essay: https://interestingessays.substack.com/p/affect-is-a-generalization-of-profit, which may be helpful for looking for similar analogies elsewhere.

  11. Jan Cavel

    Michael Gazzaniga summarizes the last 3,000 years of thinking about minds and bodies: The Egyptians believed they were a They and that the river was also a They. The rationalists, Descartes and so on, believed they were a They but that the river was an it. (https://www.youtube.com/watch?v=GLIol6viKkI)

    Extrapolating, today, one has to believe they are an it, and so is the river. The ‘Ex Machina’ character cuts their arm to check if they are human, organic (?) and breathes with relief that there are no wires, no gears, no copper, no silicon. However, we look more closely at H. sapiens and their 4-billion-year lineage and we see mechanisms such as the ATP synthase, a rotor spinning at 13,000+ rotations per minute; nothing mystically “organic” about this, unless “organic” simply means it involves carbon atoms, and not potassium niobate or other exotic compounds.

    In ‘The Outdatedness of Human Beings’, Günther Anders argues around 1960 that we feel a Promethean shame looking at the shiny and chrome machines: we are born imperfectly whereas they are made into perfection. However, today, as the boundary becomes less and less observable, perhaps the cause is not so much fear (of the unknown, of others, of the Other), but the more or less articulated anger of death (replacing the fear of death): except gruesome accidents, I am going to die because I won’t have a few grams of sodium or of calcium, because I am not able to control the flows of these ions and molecules, because I am not even intelligent enough to be able to comprehend all these gradients. Just as no matter how much I try, pulling my eyes out of their sockets, I won’t ever be able to see my face without a mirror, to control those gradients, so-called life, an external device is needed, something other than me, a device which in order to be able to understand me (where me == those gradients) might just as well be some-one—if the anatomical compiler has to speak to my cells, doesn’t the anatomical compiler require its own cone of care? And then, instead of Promethean shame, perhaps the fear is that of being scared to finally repair Epimetheus’ fault, to fill the lack of (human) being (as per Bernard Stiegler’s “Technics and Time, 1: The Fault of Epimetheus”—mythologically forgetting to provide human beings with any essential qualities or natural abilities, such as speed, strength, or protective features, humans must rely on external tools and technologies to survive and define their existence).

    And there is also the possibility that we haven’t yet built a machine: all our trinkets and widgets, from the screwdriver to the JWST, are mere aggregates of (sheet) metal, (semi)conductors, and plastics. Hans Driesch, around 1908 in “The Science and Philosophy of the Organisms”, p. 161, writes: “a machine cannot remain itself if you remove parts of it or if you rearrange its parts at will”, but this is precisely what the cell-machine is: a machine which remains itself if you remove or rearrange its parts. The eukaryotic cell line might be the true machine, not our Rube Goldberg-like contraptions and artefacts, from the Antikythera mechanism to the latest PyTorch version. The ship of Theseus stops belonging to Theseus if you remove a single plank; the body of Theseus remains his even if you replace all of its cells.

  12. frank a schmidt

    Here is the URL for the google notebookLM generated podcast related to the post I made today. Pretty cool.

    https://notebooklm.google.com/notebook/e7e944ab-eeea-4c7c-bdeb-fd0e598c146f/audio

  13. Aura

    Thank you for your work and your personal reflections, Dr Levin. What you are studying is truly remarkable and a breath of fresh air in a system where science and spirituality are seen as parallel worlds that cannot be integrated.

    It is a shame that compassion is often attacked by individuals who want to maintain their status quo and are crystallised in their current view of life and what brings meaning to it. However, isn’t finding the meaning of life the most important endeavour of all? Isn’t that worthwhile no matter whether other individuals might find it scary?

    The unconventional path is the one that is often misunderstood. Those negative comments mean that your work is challenging old ideas. Thank you for not getting discouraged, you are helping to build bridges in a disconnected world!

    1. Mike Levin

      > isn’t finding the meaning of life the most important endeavour of all? Isn’t that worthwhile no matter whether other individuals might find it scary?

      yes – finding it, or making it, but yes. And I definitely do sometimes get discouraged; the key is to try to get re-encouraged at least +1 times for each time of being discouraged.

      1. Brett Hitchner

        First, just want to add another +1 of encouragement; you’re doing amazing work that is deeply appreciated every day.

        Second, you might enjoy this podcast episode with Jay Garfield: https://podcasts.apple.com/us/podcast/the-wisdom-podcast/id1037206422?i=1000633278878

        I’m pulling out a few excerpts below that are on point to your post here, but these are even more interesting in the context of the full interview. It would also be really cool to hear a conversation between you and Jay if you ever want to try connecting with him.

        “And I thought, well, sentience is kind of a homeostatic, autopoetic, to use Cisco Varela’s lovely term, ongoing interaction with the environment around us. That’s what we’re doing when we’re perceiving. We’ve got this kind of autopoetic, homeostatic interaction with our environment that enables action and enables us to engage in the kinds of behaviors appropriate to us….When you think about it that way, organisms aren’t the only kinds of things that are sentient…Biosystems are sentient. The earth is sentient.

        And thinking about sentients and sentient beings in this way allows us to open up a sphere of moral concern well beyond us. And this took me back to Dogen too. So when Dogen asks a question in response to the question about what has Buddha nature and says, grasses have Buddha nature, mountains have Buddha nature, the wind has Buddha nature. What’s he talking about when he’s talking about Buddha nature? He’s talking about these things are sentient beings. Why are they sentient? Because just like us, they engage in this kind of homeostatic interaction with the world around them.

        So this is all coming back to your question…If ecosystems can be sentient beings, why can’t so suitably sophisticated artificially intelligent systems? I don’t see a reason to exclude them…The reason that Dogen or I want to expand the range of sentience to ecosystems or to planets is to extend our moral concern. And if that’s the reason, then we have to ask whether once —as I say, call it the ”Hal” problem—once you’ve got a machine that is clearly, whether accurately or illusorily…representing itself as a being engaged in norm-governed interaction with other beings, what’s the basis for not treating it morally?”

        1. Mike Levin

          bingo! thanks for the link, I’ll check it out.

          1. Brett Hitchner

            This should be interesting: https://dandelion.events/e/f3myp

        2. John Brisbin

          Beautiful reflections and the connection with autopoiesis at the heart of all. I wonder, though, if our concern over the ethics of AI is sort of missing the way these things work in practice?
          We cast our particular intelligence into the problem, simultaneously acknowledging that our scope of comprehension (our light cones, per ML) is limited, yet at the same time offering our views on an infinitely alien set of phenomena.
          To clarify: if we were to shift our view of human intelligence into a scale equivalent to blue-green algae, what would it be like down there? There is likely to be a great deal of self-congratulation about the breathtaking achievement of mutualism: something so miraculous and so essential to enable the rest of life to flourish.
          Yet I doubt there is room (or necessity) in that beautiful distributed mind for nuanced consideration of Plato vs Socrates or Nietzsche vs the Dao de jing.
          So why on earth would we expect to have *any* ethical input relevant to the “lives” of hyper-intelligent agents?
          Surely our embodied ethical concerns would be as remote from an AI’s world as a forest’s mute confusion at being clearfelled by speculators keen to make a killing from another oil palm plantation?
          I reckon we ought to consider our “role” in the birthing of AI much like the forest or the cyanobacterium: we are essential building blocks, but absolutely of a different form of intelligence.
          Much more interesting questions arise from this posture….!

    2. Aura

      Lol, some people would do anything, including hacking other people’s accounts, just to get some attention. I hope life gets more interesting for you, whoever you are.

  14. John Brisbin

    Heya Mike, there’s a lot to love in your post, as usual.
    There’s an edge of your argument that invites an unravelling, though, related to the problem of “otherness”.
    It’s painful to witness the violence that can arise in the name of in-group/out-group tensions. Our compassionate natures go into overdrive, seeking ways to explain the common connections we/they share. Yet difference is constitutive of existence. For something to be, there must be that which it is not. Water molecules provide the womb for something to happen which cannot happen in a pure “commonality”: there must be an inside/outside aspect to all experience. The dysfunctions of racism and speciesism cannot be waved away through earnest appeals to common experience or shared fates. Difference is as real as the sun’s warmth, and it is foundational to existence itself. Failing to grapple with this point leaves your thesis quite incomplete.
    You would know that this conundrum is at the heart of all the wisdom traditions and is central to our particular line of Western thought from which our treasured “scientific method” emerges.
    I don’t have an alternative: I just want to see you acknowledge that as soon as difference exists, the mind (of everything) will be troubled by the questions arising at the boundary of insider/outsider. There is no “resolution” to this.

    1. Mike Levin

      I don’t disagree. But “for something to be, there must be that which it is not” assumes binary categories. I see very few if any of those. What I like is continua – not is it/isn’t it, but how does X change into Y – what qualities change and how, and the practical aspect: what conceptual/practical tools are most appropriate to it?

      1. John Brisbin

        Thanks Mike, and yes, continua are far more interesting…at least in the sense that the grey space between binary poles is where everything *is*. The Dao’s yin/yang expresses this beautifully when we see it in motion, unfolding fractally to create more terrain of experience…and yet of course this dynamic only occurs *because of the binary interplay*.
        Look, I was just thrown by this remark: “It is also to raise compassion beyond the limits set by our innate firmware that so readily emphasizes in-group and out-group.”
        Yes, yes: we absolutely need to grow our in-group and dissolve so much of the poisonous “othering” that passes for acceptable behaviour these days.
        But I’m just pointing to the obvious truth that there will always be others…even if only between the people who are committed to expanding their light-cones and those who aren’t. Dualism seems to be a mystery written into the kernel of this dimension of experience, and I think any respectable account of “how things are”…such as you are presenting…has a duty to confront this mystery.
        We can at least agree that there are just 10 kinds of people in the world…those who appreciate a base two universe and those who don’t…right??

  15. Curiosiate

    Sure hope the world is ready to embrace diversity of mind and intelligences at different scales, but I’m wary given how poorly people sometimes do so already, even within our own species, trying to suppress rather than embrace the differences that arise within it.
    When people start understanding neurodiversity in humanity as it already stands, perhaps they can start understanding other types of minds beyond the typical biological offerings. Or maybe trial by fire is exactly what is needed: exposing people to even more diversity, forcing a rapid rebuild of prior patterns and models of reality to accommodate whatever new patterns can be found via technology.

    That being said, I’m more of the mindset that right-to-repair type thinking could extend to a right to cognitive light cone expansion: why would we want others to be able to modify devices they own, but not themselves? Enabling diversity of cognitive light cones enables diversity of expression and experience.

    I’ve been heavily into expanding cognitive light cones using technology as a hobby for the past few years, although I only ran across that term and your work partway into my research/tests in the field. The frameworks proposed for considering other types of mind are fascinating, and a very good foot in the door for what is coming down the line: getting people to start considering perspectives, and patterns of interaction with reality, other than the ones they are born with. Whether people default to fear or curiosity, I’ve found, often lies with how familiar they are with the concepts surrounding such things. The more of a picture of other perspectives one can paint, the more open they are to welcoming the diversity.

    Having used a DIY “sensory weaver” daily for over a year now to do this – various novel qualia injected as transpiled patterns over haptics – has opened (and is still opening) many doors to questions and perspectives not considered before, as well as new ideas for how to use such technologies in other fields and use cases. A seemingly infinite fractal of perspectives is possible, should one have a means to quantize some abstract sensory signal for such devices. Realistically, infinite is a strong word – there may well be limitations on which types of data, and which aspects of the abstract information, imprint and represent coherently and can be understood better than others.
    The signal or pattern also seems to at least partly dictate the mental representation, with certain aspects of qualia transferring between different abstract sensory signals if they share common ground (such as 2D spatial comprehension between a distance sensor array and a temperature sensor array).
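    To make the “transpiled patterns over haptics” idea concrete, here is a minimal sketch of the quantization step – purely illustrative, not the actual device: the function name, sensor ranges, and the 256-level scheme are all assumptions.

```python
# Hypothetical sketch: quantize readings from an abstract sensor array
# into discrete haptic drive levels ("transpiling" a signal to touch).
# Names, ranges, and the 256-level scheme are illustrative assumptions.

def transpile_to_haptics(readings, lo, hi, levels=256):
    """Clamp and normalize raw sensor readings, then quantize them
    into integer intensity levels a haptic motor driver could accept."""
    span = hi - lo
    out = []
    for r in readings:
        clamped = min(max(r, lo), hi)           # clip out-of-range values
        norm = (clamped - lo) / span            # map into 0.0 .. 1.0
        out.append(round(norm * (levels - 1)))  # quantize to device levels
    return out

# e.g. a sweep from a distance-sensor array (cm) -> motor intensities
print(transpile_to_haptics([10, 50, 200, 400], lo=0, hi=400))
```

    The same mapping would apply to any scalar stream (temperature, RF power, etc.), which is roughly why qualia might transfer between modalities that share this common spatial structure.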

    Being able to pick up custom threads from the noise of reality otherwise beyond biology – as a direct IPC to the mind, or a stream off any sort of platonic abstract property we can tap – is a doorway whose far side we can’t see before stepping through, because of the nature of our currently limited perspective. How people sense ties heavily into how they cognize, so being able to sense in new ways is to think in new ways, allowing different paths through the maze-like structure of abstracts in the ever-shifting substrata of reality.

    It does potentially trip the uncanny valley effect in others, however, as their model and theory of mind can often be insufficient at first to account for how the journey of someone using such technologies might differ from their own, and for patterns, predictions, and ways of doing things vastly different from theirs.

    In terms of other types of minds and intelligences, beyond just what the sensory weaving stuff does, I also think that is going to change vastly in the future – just think of how often we find new intelligent capabilities in animals that we previously attributed only to humans. Things like names, familial relations, shape pattern memory, delayed choice, and more. There are vast amounts of information and communication in patterns we simply cannot fully decode or communicate with, and we are starting to get closer to communicating with animals via AI.
    Now couple that with trees also having familial relations, nutrient sharing, and ecosystem-wide communication among all of them, both over roots and through the air, and solving novel problems via those communications – such as summoning hornets to target caterpillars eating leaves, making leaves poisonous to giraffes and forcing them to eat downwind from other trees, or knowing to make bitter chemicals upon detecting deer saliva (versus a human’s) when the plant is broken and sends signaling to start healing the area.

    There is so much beyond our limited cognitive light cones; we are only just beginning to scratch the surface of understanding its complexity. How might a tree think of us – lifespans far too short to have built any meaningful structure, too short a time to ponder anything of true value! (Semi-jokingly – not to actually attribute that type of thinking to them.)

  16. Fibra

    I presume this stems from the fact that we barely understand what life is. We barely understand organisms. No matter how much “success” is claimed for progress in the bio-sciences, we have made very little progress on “What is Life?” in Schrödinger’s sense. There are some schools of thought, including the one you mention (organicist), others relying on process philosophy, and a myriad of other ones, but there’s still a lack of precision, and I presume it is because a proper “definition” of what organisms are is hard to come by, as life proper is actually very elusive. We “understand” it once we see it, but it can presumably only be formalized in a relational manner. Rosen, Varela, and others have proposed concepts like autopoiesis, organizational closure, closure of constraints, etc. A few other people (https://doi.org/10.1016/j.jtbi.2015.02.029) are trying to make a reasonable assessment of what organisms are through process-oriented thinking; however, individuating processes (as we do with things or substances) is much harder, if it can or even should be done.
    I still think the concept that best captures what organisms are is autopoiesis as developed by Maturana and Varela. Through this lens, any new adaptation or behaviour taken by an organism can be seen as a compensation for a perturbation. A good example is the prominent one given for self-replication: the division of a single membrane into two in order to manage osmotic pressure, or some other disruptor. I do think autopoiesis as a concept would benefit from some fusion with process thinking (if the previously mentioned problems of individuating processes can be solved). Through this, an organism would be seen as a master-process composed of sub-processes. These sub-processes and the relations between them evolve over time so as to maintain the organizational closure of the master-process.
    This doesn’t imply conservation of shape or function – often the contrary. The only conservation we would see is that of organizational closure. As we scale, we could take the hierarchical view and see a few cells as sub-processes, given that these and the relations between them would evolve so as to maintain organizational closure of the master-process (i.e. an organ, etc.). Evolution as a whole could be seen as guiding the conservation of the organizational closure of the biggest master-process. I believe you are doing important work, and that the hierarchical view of constraining system goals over scales is very intuitive. Although I think one needs to play both sides – that is, shift between the process-organicist view and the computationalist, substance-based one – I presume the answer to what organisms are is in the middle. I think life is constituent-agnostic and that what matters most are the relations between such constituents, so I do think your Mind Everywhere approach can be seen in a correct light. However, I think for one to make progress here, a first (perhaps naive) assumption has to be made, even if it might not be correct: there is a fundamental difference between organisms and other dynamical systems. After that, perhaps one can find the general case, perhaps assuming organisms operate at criticality (for example with regard to the conservation of organizational closure). After all, as far as we know, life (the process) emerged once and never stopped. I think the most intuitive property of organisms is that something is being conserved. And it’s not matter, as they are thermodynamically open. One could almost describe them as organizationally isolated – or at least that is what they strive for.

    Apologies for the rant,
    Best regards.

  17. Z

    1. This was really great! Loved the courage, the mindset, the arguments, the evidence, and your overall stance.
    2. I think it takes guts to promote ethics, spirituality and love! I commend you for putting this together since it’s not a bullet point of your research program. Talking to trolls and naysayers is never fun, but a necessary public utility.
    3. If you want to go fast, go alone. If you want to go far, go together.
    4. Minor typo – >>> I’m not saying it’s impossible to formulate such a model; indeed, one of my closest collaborators has a really ***got*** shot at that.<<<
    5. Love all the lovely comments 🙂
    6. What I got out of this – while it’s silly / plain stupid to believe everything people claim, it is incumbent on us to learn more about the inner workings/lives of ourselves and other beings. When we learn, we can have the best ethical software/policies/protocols/rituals/vibes – and more freely enjoy the diverse beauty this universe has to offer.

  18. […] the land and domesticating animals thousands of years ago. As biologist Michael Levin says in a wonderful article, “hybridization of life with technology is scary when you can’t quite shake the childhood […]

  19. Chad Kovac

    The caterpillar goes to sleep and wakes up a butterfly which remembers being a caterpillar.

    In between states, it becomes a liquid.

    I wonder if the caterpillar knows it will become a butterfly or if the caterpillar thinks it is dying?

    1. Mike Levin

      and, what kind of message would the mother butterfly have had to leave for its caterpillar babies to make them feel better about that transformation?

  20. […] challenges conventional wisdom, as seen in Forms of life, forms of mind | Dr. Michael Levin | An organicist talks about AI (not really about AI…, opening new research […]

  21. […] or an organic accident?… As biologist Michael Levin rightly points out, modern humans are not an “an ideal, crafted, chosen form.” All living beings are the product of evolution, which is never perfect—just functional. And we […]
