I’m often asked about my views on artificial intelligence (AI). Recently I released my first public thoughts on it, in the bigger context of diverse intelligence; the short (more general-purpose) versions are here and here, and the full (academic) paper preprint is here. As often happens, it drew some interesting responses. I think these shed important light on the field and where it needs to go, although below I speak only for myself, not necessarily for any of my colleagues.
I consider myself to be firmly in the organicist tradition – emphasizing the causal power of higher levels of organization and of cognitive perspectives, over molecular mechanisms, as ways to understand and relate to complex systems. My Mind Everywhere views often attract critiques from colleagues operating through the molecular biology lens, who believe it is a dangerous category error to entertain the idea that molecular pathways, cells, and tissues could have true goals, intelligence, and an inner perspective. Even those who are not strict reductionists generally believe that what emerges from non-brainy systems is new levels of complexity or unpredictability, not elements of (even primitive) minds.
My framework seeks to reveal more minds, not fewer, in all their glory – with degrees of inner perspective, valence, and freedom. One might think that the organicist camp, or holistic thinkers more broadly, would welcome attempts to show how minds can emerge in seemingly mindless media, and especially frameworks that illustrate how the organicist perspective can drive new discoveries and practical capabilities. But things are not so simple. After this piece, my email and Twitter DMs contained even more outrage than my mind-focused papers draw from mainstream molecular biologists. People felt that including engineered and synthetic constructs on the spectrum of true cognition alongside us was a major mistake, though no one offered a principled, convincing model of how to keep “artificial” beings out of the exclusive club we enjoy. I’m not saying it’s impossible to formulate such a model; indeed, one of my closest collaborators has a really good shot at it. But none of the responses to these ideas contained even an attempt at one – people were sure they knew a real being when they saw one, and felt very strongly that this line had to be maintained whether there was a way to justify it or not. Below, I use this as an opportunity to emphasize some key points of my view, and to speculate wildly (given that I have no training in human psychology) on some possible drivers of this visceral response. Of course, this is not really about the response to my views in particular – I’m just one of a number of people trying to make progress in this field, and this push-back applies to many of us who do not immediately see a way to draw sharp lines before we understand how our own magical cognition is embodied.
To summarize, I think the immediate push-back is driven by fear and insecurity – a subconscious recognition that we do not understand ourselves, and that AI is just the tip of the iceberg of deep questions that, when brought to the forefront, will crack our superficial, comforting stories about why we are real and important. I think that at root is the fear that there is just not enough love to go around – a scarcity mindset with respect to compassion and concern¹. I think this position can be summarized as “only love your own kind”.
In my piece, I situated AI within the broader framework of Diverse Intelligence. I tried to point out that our deep questions are not about the architectures of today’s language-model software, but about the much bigger unknowns of how to define and recognize the terms everyone throws around with abandon – minds, understanding, goals, intelligence, moral consideration, etc. It’s pretty obvious that whatever the limitations of today’s biotechnology and artificial life, their functional capabilities will increase exponentially and come to cover all of the things that used to be unique to life (especially by hybridizing and altering naturally evolved biological material with synthetic components). I pointed out that the space of possible beings (including cells, embryos, chimeras and hybrids of mixed biological and technological provenance, hybrots, cyborgs, alien life, etc.) is vast, and that we don’t have a sure footing for navigating our relationships with systems that cannot be classified according to the stale, brittle categories of “life vs. machine” that sufficed in pre-scientific ages. I was very explicit that I was not making any claims about today’s AI; my point was that we cannot make any confident claims yet, because no one has good, actionable definitions of the secret sauce that many feel they possess but that our creations supposedly cannot share in any degree.
Most crucially, my piece was not about AI – it was about beings who are not like us, and about the relevant universal problems that were here long before AI was even discussed. Having been as clear as I was about this, I take the resistance not to be about AI either; it was a general resistance to the Diverse Intelligence project writ large.
One common theme in the replies was the narrative that this way of thinking is the result of unbalanced development – a psychological deficiency. Only a tech nerd who knew nothing outside the laboratory and machines could dare speak of a continuum of mind that contains both bona fide humans and such unconventional agents as engineered beings. Anyone entertaining such ideas couldn’t possibly understand the ineffable magic of real human relationships and the strong feelings and emotions that “real” beings have. No one offered a guess as to what the magic ingredient might be, or why the meanderings of the evolutionary process should have a monopoly on creating it. But they used a familiar trick for resisting new ideas: painting their adherents as deficient – “they don’t feel the magic like we do; that’s why they say those crazy things.” This way of holding on to old ideas, in the face of challenges that require thought and convincing argument, is ancient. It is comforting and easy to retreat behind the feeling that you directly perceive a truth which escapes the others because they’re just not as developed as you.
There is a curious phenomenon in which people with a particular issue tend to see it everywhere and project it onto others. I think that seeing workers in this field as incomplete is, ironically, just a mirror of some people’s inability to imagine what it’s like to be someone who is not like them in every way – a kind of failure of imagination and empathy. I suspect that the outrage (at seeking commonalities between highly diverse intelligent systems) is often driven by an innate feeling of incompleteness – a worry that their own development will not have been complete enough to embrace the future. This causes them to misunderstand the scientific and ethical goals of many of us in the field of Diverse Intelligence. It’s scary to see empirical testing of philosophical commitments, because one might be put in the uncomfortable position of having to give up ideas that one cannot convincingly defend.
For this reason, a key risk of testing philosophical ideas against the real world (i.e., engineering) is that people rush to see it as an elevation of tech over humanity. This happens no matter how much one talks about the meaning crisis, the importance of broadening our capacity for love, and the centrality of compassion – profoundly human issues that are the very opposite of technology-worship. Here’s how I define engineering:
I view engineering in a broader sense: taking actions in physical, social, and other spaces, and finding the richest ways to relate to everything from simple machines to persons. The cycle I like is: philosophize, engineer, and then turn that crank again and again as you modify both aspects to work together better and facilitate new discoveries and a more meaningful experience. Moreover, the “engineer” part isn’t just third-person engineering of an external system. I’m also talking about first-person engineering of yourself (changing your perspectives and frames, augmenting, committing to enlarging your cognitive light cone of compassion and care, etc.) – the ultimate expression of freedom is to modify how you respond and act in the future by exerting deliberate, consistent effort to change yourself.
So let me clarify my personal position. The goal of my work is fundamentally ethical and spiritual, not technological. I want us to learn to relieve biomedical suffering so that everyone can focus on their potential and their development – on enlarging their cognitive light cone, which is so hard to do when one is limited by the developmental consequences of a random cosmic-ray strike to one’s cells during embryogenesis, or an accidental injury that leaves one in daily pain. It is also to raise compassion beyond the limits set by our innate firmware, which so readily emphasizes in-group and out-group. We can start by learning to recognize unconventional minds in biology, and move on from there. That’s what I’m focused on now, which is why biomedical engineering is such a big part of the discussion – so that people understand how practical and important this perspective is. But of course the bigger implications are about personal and social growth.
The goal of TAME is not just “prediction and control”. That is what it looks like for minds on the left side of the spectrum, and that is how it has to be phrased to make clear to biologists and bioengineers that talk of basal cognition is not philosophical fluff but an actionable, functional, enabling perspective that moves science and medicine forward. But the same ideas work on the right side of the spectrum, where the emphasis shifts to a rich, bi-directional relationship in which we open ourselves to being vulnerable to the other, benefiting from their agency. What is common to both is a commitment to pragmatism, and to shaping one’s perspective based on how well it is working out for you and for those with whom you interact – in the laboratory or in the arena of personal, social, and spiritual life. Why is this so hard to see – why do efforts at working out a defensible way of seeing other minds get interpreted as an anti-humanist betrayal in favor of technology?
In the end, I think it boils down to feeling threatened – to buying into the idea of a zero-sum game with respect to intelligence and self-worth: “my intelligence isn’t worth as much if too many others might have it too”. I doubt anyone consciously has this train of thought, but it is what I think underlies those kinds of responses to pieces on Diverse Intelligence: the feeling not only that love is limited and one might not get as much if too many others are also loved, but also that one may simply not have enough compassion to give if too many others are shown to be worthy of it. Don’t worry; you can still be “a real boy” even if many others are too.
I think it would be worthwhile to consider how we could raise kids who do not have this scarcity mindset. What kind of childhood would make us feel that we don’t have to erect superficial barriers between our magic selves and others who don’t look like us or who have a different origin story? What kind of education could convince people that the question of who might have emergent minds is a deep, difficult, empirical question, not one to be settled based on feelings and pre-commitments?
The reductive eliminativists, while wrong and impoverished, are at least egalitarian and fair. The “love only your own kind” wing of the organicist and humanist communities, who talk glibly of “what machines can never be”, are worse, because they draw indefensible lines in the sand that can be used by the public to support terrible ethical conclusions (as such “they are not like us” views always have, since time immemorial). A self-protective reaction leads people to read calls to expand the cone of compassion in a rational way, but to hear only “machines over people, pushed by tech-bros who don’t understand the beauty of real relationships”. Other, unconventional minds are scary if you are not sure of your own – its reality, its quality, and its ability to offer value in ways that don’t depend on limiting others. Having to love beings who are not just like you is scary if you think there’s not enough love to go around. Letting people have freedom of embodiment – the radical ability to live in whatever kind of body you want, not the kind chosen for you by random chance – is scary when one’s brittle categories demand that everyone settle into clean, ancient labels. Hybridization of life with technology is scary when you can’t quite shake the childhood belief that current humans are somehow an ideal, crafted, chosen form (lower back pain, susceptibility to infections and degenerative brain disease, astigmatism, limited life span and IQ, and all).
It’s terrifying to consider how people will free themselves, mentally and physically, once we really let go of the pre-scientific notion that some benevolent intelligence intended us to live in the miserable state of embodiment many on Earth face today. Expanding our scientific wisdom and our moral compassion will give everyone the tools to have the embodiment they want. The people of that phase of human development will be hard to control. Is that the scariest part? Or is it the fact that they will challenge all of us to raise our game, to go beyond coasting on our defaults, by showing us what is possible? One can hide all of these fears under facades of protecting real, honest-to-goodness humans and their relationships, but I think it’s transparent and it won’t hold.
Everything – not just technology, but also ethics – will change when we confront the deep questions of what makes us real and important, and who else might be out there with us. So my challenge to all of us is this: paint the future you want to see, dropping the shackles of the past. Transcend scarcity and the focus on redistributing limited resources, and focus on growing the pot. It’s not for you – it’s for your children, and for future generations.
1. I want to be clear here that I don’t mean this to apply to everyone. There are of course others in the field, including close colleagues, who are working on complex, nuanced, defensible, and useful views of the difference between possible engineered agents and naturally evolved ones. Those few are not what this is about.
Featured image by Midjourney.
