The following is another set of questions (in no particular order or organization) that I’ve been asked after talks, in emails, on Twitter, etc., along with my attempts to answer (some of the most common are here). I saved these because they are either interesting or come up a lot (or both).
Q: Isn’t the application of cognitive terms (memory, decision-making, etc.) to something other than brainy animal behavior an inflationary use of those terms – a category error?
A: I make a much more careful version of this argument here and here. First, cards on the table: I claim that the only judge of how correct an idea/approach/category is, is whether it helps discovery, drives advances, and enables novel capabilities. I work in synthetic morphology, regenerative medicine, bioengineering, and the embryogenesis & deep evolution of cognitive functions. What hasn’t helped us are philosophical pre-commitments to crisp categories and their borders – armchair definitions. What does (apparently) help us is taking concepts from other fields and seeing if they move our work forward. So far, we’ve been able to steal almost everything from neuroscience – optogenetics, specific drugs, ion channel plasmids, neurotransmitter pathways, active inference, perceptual bistability, visual illusions, neural network dynamics, etc. – and use it to make new discoveries. It turns out, as a matter of empirical testing, that frameworks related to memory (even counterfactual memories – so-called mental time travel), learning, decision-making, etc. are really helpful for understanding the problem-solving capacities of cellular swarms, and for exploiting training and communication protocols (as one would in behavioral science) to reach new applications in birth defects, regeneration, and cancer. This is why I claim that neuroscience is not just about neurons, and that body cells are literally a collective intelligence that navigates (anatomical) problem space during embryogenesis, cancer suppression, regeneration, and remodeling. Actually, this should be no surprise, because this ancient system is what nervous systems evolved from. My view of whether this is a category error is based strictly on whether applying these categories to my chosen substrate gives us empirical success.
Everyone else can decide for themselves, but if one doesn’t like that criterion, one should specify what the favored alternative is – if there’s a criterion better than prediction, control, and the discovery of new research roadmaps, I’m happy to listen. If one doesn’t favor the appropriation of terms from cognitive science to unconventional substrates, one has to show why that kind of gatekeeping makes science better than the alternative. So, to sum up: I am not saying that the basal cognition of cellular swarms navigating anatomical, physiological, or transcriptional state spaces is of the same magnitude as human cognition; for example, I’ve seen no evidence of planning, or of advanced linguistic syntax, in our contexts. I am saying that the data show that the molecular mechanisms and algorithms used by neural networks to support advanced cognition are ancient; they serve similar functions in networks made of other types of cells, which navigate problem spaces other than the familiar 3D space of behavior, and they were pivoted by evolution into the much more obvious brainy capabilities. So there is a spectrum, and many cognitive terms usefully apply much more broadly than their orthodox domains.
Q: You deal with some out-of-the-mainstream ideas, some of which clash with conventional paradigms in evolutionary theory, molecular biology, etc. Does that mean you’re into creationism, mysterianism, etc.?
A: I am fairly open-minded to new ideas, but as far as my science goes, there’s one rule: they have to have the potential to lead to progress. Non-naturalistic ways of thinking (ranging from mysterianism – “we’ll never know” – to creationism – “a super-intelligent being beyond our comprehension did it”) have the major defect that they are sterile: they suggest no research agenda. They stop the flow of questions and empirical answers; they don’t facilitate progress. Thus, I am not interested in engaging with any line of thinking that does not have the potential to drive our understanding forward, in terms of directly leading to new lines of experiments to do. Within that requirement, I’m up for considering revisions to any and every cherished assumption; I’m not tied to current dogma in physics, biology, or any other field – I love weird ideas, for their disruptive potential to open up new vistas of investigation. I also understand well the limits of scientism in one’s life. But as I tell people who approach me with all kinds of big ideas and “challenges to the paradigm”, the first question is: what new experiments, capabilities, and research can be reached by applying this way of thinking? What experiments does it suggest that existing views didn’t facilitate? What has that viewpoint done to close a knowledge or capability gap? That’s the #1 deficiency in a lot of the really creative stuff people email me about – no obvious bridge to impact. I’m not interested in just critiquing existing views; I want paths forward – ways of thinking that tell us where to look next and how.
Q: What’s with the Centaur on the front page of your academic site?
A: I love this painting, The Neurologist by Jose Perez. I’ve modified the original a bit, but overall, what it conveys to me are two ideas: 1) scientists working together to rigorously investigate something unconventional and previously unrecognized, and 2) best of all – he’s holding them up – he wants to help. This feeling, that discoveries want to be made and will in some way help the seeker of knowledge, resonates with me. I try to run a lab that honors both of these principles.
Q: In terms of LLMs and AI (current ones, not hybrids of technology with biological tissue), would you say these systems sit somewhere on the scale of cognition/intelligence/consciousness, or are they more similar to rocks, merely simulating cognition?
A: This is tricky. On the one hand, we can no longer assume that being able to talk, in ways that humans find useful, is a signature of high agency. And my suspicion is that current architectures do not meet the requirements. In fact, I started writing a paper on what is required (what biologicals have that current machines don’t) but stopped, because I didn’t want the ethical responsibility for what happens when people do build “machines” with those principles, which I think is totally possible. To whatever extent I’m right about what’s needed, those would be moral agents that matter, and I already have 2 kids; I don’t need a trillion more to worry about.
BUT, we are also learning that just because you know the ingredients of a thing you’re making does not mean you know everything it does. In some ways, our progress in the information sciences and AI has had the flavor of “competence without comprehension” (in Dan Dennett’s phrasing) – just the way we give rise to human embryos: super competent at it for eons, with zero comprehension of how the material really works until very recently (if at all). So I think we should stay humble and be prepared to find out that these LLMs do in fact have some surprising emergent aspects of true agency; I don’t think we can rule it out based on how they seem to work, or on our preconceptions about what “machines”, neural network architectures, etc. ought to be able to do.
Q: Why don’t you address topic X (aliens, cosmic consciousness, etc. etc.)?
A: People email me about all kinds of counter-paradigm topics – alternative medicine, other worlds, religious experience, and so on. My strategy for what I talk about is simple: I only say things in public that I think are helpful to others – things on which they have some justification for listening to my thoughts. My opinion on random topics, on which I have no expertise or unique data, is not helpful to the general public and doesn’t add anything. My personal opinions about what might or might not be true are quite separate from the things I say in public, which are typically only things I think we have developed very strong evidence for, and things that other people can benefit from hearing me say (which does not include the many unprovable opinions that friends and family get the dubious benefit of hearing me spout off on). For general audiences, I mostly write and talk about things that I think are strongly implied by our data, or unique interpretations that can be helpful to others because there’s some chance I may say something that enriches what others have already said. Overall, I currently say about 70% of what I believe. I don’t say anything I don’t believe, but I definitely don’t talk about all the things I think are true, because I only want to share things that I think you too should become convinced of, and getting evidence good enough for that is a slow, painful process. That percentage has risen over the decades, and we’ll see how far I get.
Q: What about Rupert Sheldrake’s work? Doesn’t your work on bioelectrics prove him right?
A: I know Rupert; I think he’s interesting, and his hypothesis – which to me sounds like “Hebbian learning by the Universal mind” – is also interesting. I am glad that Rupert is out there producing these ideas and thinking about experiments that could support what would, in effect, be a revision of materialist physics as we know it. I’ve read most of his books, especially the earlier ones, and am glad I did. Do I need his hypothesis to explain anything we’ve done in the lab? For now, the answer is no, which is why I haven’t gotten involved in any public discussions of his ideas. I have nothing significant to add to what he’s already (eloquently) said, as our work neither requires his proposed effects nor sheds light on how they might work. I have a specific research path that keeps me very busy, and no unconventional physics is necessary for it, so there’s no useful reason for me to get involved in this topic right now. At one point, on a podcast, someone said that I don’t talk about his work because I’m afraid of being labeled a heretic. That’s not it; that ship has sailed, and I’m fine being a heretic on things I have new, strong backing for. There are also a thousand other topics I find interesting but don’t need for what I am trying to develop, and I don’t have the time to go deep into all the other controversies. I am glad Rupert’s ideas are out there; at some point they may connect in a strong way to what we’re doing, and if they do, I will have no problem diving in. For now, no helpful connection between our results and his ideas has emerged.
Featured image by DALL·E 3.
