On biology and computation: a dialogue between Chris Fields and Tyler Clark

Here is an email thread of a dialogue (posted with permission) between Chris Fields and Tyler Clark (and a bit of me, and a bit of Karl Friston) on the topic of life and computation.

CF = Chris Fields, working at the intersection of physics, computer science, and biology. More information and recent publications are available at https://chrisfieldsresearch.com/.

TC = Tyler Clark, an MD with a background in developmental biology and cancer research and an interest in understanding the nature of information processing in living systems generally. “I am unsure whether such a general description exists but I am deeply curious whether it does.”

KF = Karl Friston, a neuroscientist and polymath with an amazingly rich body of work. See more at https://en.wikipedia.org/wiki/Karl_J._Friston.

ML = Michael Levin, working at the intersection of computer science, biology, and cognition. See more at https://drmichaellevin.org.

The Dialogue:

TC: Yesterday (2/2/24), I presented these slides to the Computational Phenomenology group run by Maxwell Ramstead, which Karl kindly attended. Karl had very interesting things to say about my ideas and how to improve them. In short, my perspective is that Turing computation, while universal, will increasingly run into efficiency barriers as we try to model living systems accurately. Thus, we need new hardware to model (and, therefore, new theory to describe) how living systems handle information more generally. This will help us understand both biological and computational systems more thoroughly. I may be wrong about this, but it seems others agree the discussion is worth having. My ideas, if published today, would constitute something of a conceptual position paper, intended only to provoke thought and discussion, nothing formal or definitive. Thus, I’ve linked the discussion video and attached the slide deck. (Mike, you’ve already seen and commented on an earlier draft of the slide deck.) Here is the list of literature subsequently recommended by Karl on the topic:

https://arxiv.org/abs/1906.10184

https://doi.org/10.3390/e23091220

http://www.scholarpedia.org/article/Oscillation_death

https://arxiv.org/abs/2112.15242

https://doi.org/10.1038/nrn2787

https://arxiv.org/abs/2212.12538

https://arxiv.org/abs/2311.09589

https://doi.org/10.3389/fpsyg.2019.02688

https://doi.org/10.1016/j.biosystems.2022.104718

CF: You are pointing out some valuable things here, many of which are very broadly accepted. The first phase of AI research, “good old-fashioned AI” (GOFAI), attempted to model human cognition as rule-based symbol manipulation. It was a dismal failure, as was widely recognized by the mid-1980s. Current AI systems are almost all based on some form of real-valued computation (in a finite approximation) as implemented, e.g., by artificial neural networks (ANNs). However, ANNs are, with few exceptions that involve specialized chip designs, themselves implemented on standard von Neumann computers, i.e. computers that separate processing from memory. ANNs and many other kinds of programs start by implementing some function (or set of functions) and, after interacting with their environments for a while, end up implementing some completely different function or set of functions. This is a general way of describing “learning,” though as you point out, it also describes “death” and any other change in information-processing behavior. The “explanation problem” in AI refers to our current inability to figure out what programs are doing after they have learned something from their environments.
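A minimal sketch (illustrative only, not from the dialogue) of this “learning as function change” point: a one-parameter model starts out implementing one function of its input and, after interacting with an environment that supplies examples of y = 3x, ends up implementing a different one.

    import random

    w = 0.0  # the model starts out implementing f(x) = 0 * x

    def f(x):
        return w * x

    # Interact with an "environment" that supplies examples of y = 3 * x.
    for _ in range(200):
        x = random.uniform(-1.0, 1.0)
        error = f(x) - 3.0 * x    # prediction error against the environment
        w -= 0.5 * error * x      # gradient step on the squared error

    print(f(2.0))  # ~6.0: the system now implements (approximately) f(x) = 3 * x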

We do not know, for example, how AlphaFold predicts protein structures; we just know (by doing experiments on proteins) that it predicts them better than any other method we’ve tried. None of this has anything to do with “logical closure,” since the environment is (for all practical purposes) not logically closed as a source of information. The only real limitation on these systems is input bandwidth, which is a finite resource just like time, space, or energy.

If I am reading your slides correctly, you appear to be using “Turing computation” to refer both to computation implemented on Turing-machine (TM) like architectures and to the theory of computation, or computability, that uses TMs as a foundational model. Any particular architecture is a model of computation; the theory of computability is about what functions can be computed by finite systems, i.e. by systems that employ only finite resources. All known architectures, including ANNs, neuromorphic chip designs, reservoir computers (arbitrary physical systems trained with reinforcement learning), and even quantum computers are Turing equivalent, i.e. have exactly the same computational power as a TM. Quantum computers are more efficient than classical computers, but compute no more functions. (A claim was made several years ago that problems not solvable by TMs could be solved “to high probability” by a particular QC configuration. As far as I know, this claim has never made it to formal publication. I believe one of its assumptions is not physically realizable, i.e. not mathematically consistent within QT.)

On slide 4, you characterize living systems as “neither finite nor discrete.” I disagree. Living systems do not employ infinite resources, e.g. infinite time, space, or energy. Living systems have only finite I/O bandwidth with their environments (they have finite Markov blankets). Though we often use continuous mathematics (e.g. dynamical systems theory) to characterize living systems, their dynamics are discrete at every level we can probe experimentally; indeed, all physical systems are discrete in this sense. If QT is correct, spacetime itself is discrete.

A good reference for applying the general theory of computation to arbitrary systems is here. As emphasized in this paper, saying that something is a “computer” is giving its behavior a semantic interpretation based on some finite set of finite-resolution measurements. Hence there is a deep sense in which the theory of computation is really about the possibility of constructing self-consistent semantic interpretations. This is also what we are doing in biology.

Hence I agree with you that GOFAI-style symbolic architectures are not very useful for describing living systems (or much else), but I do not agree that we need a new theory of computation/computability. The theory of computability in fact describes (as far as I can tell) the behavior of generic quantum systems, of which (assuming QT is true) living systems are examples.

I do not understand what you are saying about partial functions.  Any partial function can be re-expressed as a function, and any function can be re-expressed as a partial function, so there is no significant mathematical distinction and hence no significant physical distinction between them.
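A minimal Python sketch of this interchangeability (illustrative, not CF’s notation): a partial function becomes total by adding an explicit “undefined” value to the codomain, and a total function becomes partial by restricting its domain.

    from typing import Optional

    def partial_sqrt(x: float) -> float:
        # Partial: defined only on x >= 0.
        if x < 0:
            raise ValueError("outside the domain")
        return x ** 0.5

    def total_sqrt(x: float) -> Optional[float]:
        # Total: the same information, with None standing in for "undefined".
        return x ** 0.5 if x >= 0 else None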

TC: Thank you for your sharp insight. My complaint about von Neumann architectures, or Turing-like computation, is not with their universality (the ability to compute any computable function) but with their style. Just as quantum computation is more efficient than von Neumann architectures for certain applications, biological computation is more efficient than Turing computation for certain applications. The example I used in the talk was that the human brain runs on 20 watts, whereas ChatGPT still can’t make a cup of coffee. In the discussion, one of the researchers noted the Turing equivalence of biological computation by virtue of recursion, and I remember wondering how much memory it would take to achieve that depth of recursion. Indeed, even in climate science, solving the Navier-Stokes equations head-on for a system like the climate takes enormous computational power. Living systems employ a vastly different form of computation than the von Neumann architecture; perhaps I should have made these clarifications more explicit in the slides.

My mathematical example may fall short in technical terms, but what I was attempting to describe was the contextuality inherent in every level of biological function. For example, if we imagine the environment as the input domain, the rods and cones in the retina respond to certain frequencies of light and not others, resulting in a partial function. The retinal ganglion cells respond to specific visual patterns and not others, resulting in a partial function. The optic radiations in the brain respond to specific portions of the visual field and not others, resulting in a partial function. This hierarchical contextuality underlies how living systems perceive and respond to environmental stimuli. Perhaps, then, this is biology after all. Karl rightly pointed out that I was seeking a teleology. In the sense that all living systems are entropy-reducing, they are handling physical information in a way that modern technology cannot replicate without theoretical extension.
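A toy rendering of this hierarchical contextuality (the stage names and thresholds below are illustrative, not physiological claims): each level computes a partial function of its input, and undefinedness propagates whenever a stimulus falls outside a level’s domain.

    from typing import Optional

    def cones(wavelength_nm: float) -> Optional[str]:
        # Photoreceptors respond only to (roughly) visible wavelengths.
        return "photoreceptor signal" if 380 <= wavelength_nm <= 750 else None

    def ganglion(signal: Optional[str]) -> Optional[str]:
        # Retinal ganglion cells respond only to specific patterns in that signal.
        return "edge detected" if signal == "photoreceptor signal" else None

    def visual_pathway(wavelength_nm: float) -> Optional[str]:
        # A composition of partial functions: None propagates up the hierarchy.
        return ganglion(cones(wavelength_nm))

    print(visual_pathway(550))   # 'edge detected': in-domain at every stage
    print(visual_pathway(1000))  # None: infrared never enters the hierarchy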

CF: Thanks for these clarifications. Most biologists and AI people would probably agree with you about architecture. Based on their energy consumption, I expect that living systems are using quantum computation, as described in the “quantum cells” paper. Contextuality is a critical issue for which there are now a number of formal characterizations; the “contextual” paper reviews several. The operators used in these approaches each execute partial functions of the whole input stream, as you point out; each executes a function on its own domain, or sample of the environment. The key issue is how these functions relate to each other. This is an active area of research in physics, cognitive science, AI, etc., with a lot of relevant experimental data.

TC: In that case, the question then becomes whether, just as classical physics models classical systems, there is some description of biological computation in the classical limit that can be modeled efficiently, given the appropriate theoretical treatment.

CF: We are biological computers, and we construct models of other living systems (including each other) very efficiently.  Our models are good enough that we’ve survived so far, but we’re also constantly reminded of just how bad they are.  “Good, fast, cheap, pick two” very much applies, and organisms need models that are fast and cheap. From this perspective, maybe it’s not such a bad thing that von Neumann machines and even ANNs have architectures very different from ours.  They force us to look at things differently, and help us to notice the vast array of things that we don’t understand.

TC: I very much agree with your nuanced view of other computational models. In fact, that is exactly what interests me about this problem. Let me ask the same question in another way: if you were to envision biology in the absence of any terrestrial knowledge of living systems (cells, bioelectricity, etc.), how would you go about defining these systems? I think they are informational, as they relate to entropy reduction. Also, there must be some means by which to handle information in a particular way, such that the output is (locally) entropy reducing. If we are to consider exobiological or artificial living systems (indeed, other computational models), by what common information-handling properties do these living systems relate to each other? If we are ever to build artificial living systems, does this not imply a deeper understanding that transcends Earth-based descriptions and seeks a more universal understanding of living systems?

CF: Consider the FEP [Free Energy Principle]. It says that persistent systems will behave in a way that maintains the integrity of their boundaries (Markov blankets). We could also say: persistent systems will behave in a way that keeps their interactions with their environments strong enough to provide the thermodynamic free energy needed to maintain the integrity of their boundaries, but weak enough to prevent their environments from tearing their boundaries apart. They do this by predicting what their environments will do to them next, and then acting to either exploit or thwart their environments’ next actions. If they get a prediction too badly wrong, or can’t act effectively to exploit or thwart, they stop persisting, i.e. that particular system-environment distinction goes away. Persisting is a form of entropy reduction: any persistent boundary divides the world into this and that, massively reducing the total number of allowed world states.

The FEP doesn’t pick out living systems; it applies to everything. So we’re forced to come up with some additional criteria of cleverness, persistence time, etc. to say what a “living system” is. This paper argues that “life” refers to a particular lineage here on Earth, that is, to a unique object (a bunch of cells) with a particular history. A system could be very clever, but not last long because its cleverness requires more thermodynamic free energy than its local environment can provide. A system could also be very clever, yet be ripped apart by some environmental fluctuation it couldn’t cope with. Either of these could happen to us, or to life on Earth; indeed, one or the other eventually is a near certainty.

How much persistence do we need to call something “living” or even “interesting”? New forms of life, nothing like ours, may be arising on Earth all the time, but not lasting long enough for us to notice. We may have had alternative lineages on Earth that lasted a billion years, but they could easily have left no signs we can read. There could also be very clever systems that did more quantum computation and hence needed much less thermodynamic free energy than we do. We may never see these, since they would leave only a very weak thermal signal in their environments. I expect there’s “life” all over; indeed, I don’t draw much of a life/non-life distinction. I’m not convinced that there’s any principled theory for what we’d intuitively call “living.”
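A deliberately toy simulation of this verbal story (an illustration under invented dynamics, not the formal FEP): a system persists only while its predictions let it counter what the environment does to its boundary. Here the erosion from residual surprise slightly outpaces repair on average, so the system eventually stops persisting, echoing the “near certainty” above.

    import random

    integrity = 1.0    # boundary (Markov blanket) integrity
    prediction = 0.0   # the system's model of what the environment does next

    for t in range(100000):
        perturbation = random.gauss(0.0, 1.0)    # what the environment actually does
        error = perturbation - prediction        # prediction error ("surprise")
        prediction += 0.1 * error                # update the internal model
        integrity -= 0.01 * abs(error)           # unthwarted surprise erodes the boundary
        integrity = min(1.0, integrity + 0.005)  # harvested free energy funds repair
        if integrity <= 0.0:
            print(f"t={t}: boundary gone; that system-environment distinction goes away")
            break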

KF: You should publish this exchange as a conversation (much like the introductory chapter in Sir Arthur Eddington’s “Space, Time and Gravitation“).

TC: This resonates deeply with my own thinking. I suspect the universe may be teeming with life and we simply lack a broad, inclusive enough definition to recognize or understand it. Still, in this sense, our intuition does say something about life. You even called it a persistence. This persistence can begin and it can end, being defined by some temporal boundary. It must also be defined by some spatial boundary, a Markov blanket that separates the persistence from its environment and massively reduces the number of accessible states, reducing entropy. And it must be moderately coupled with its environment for boundary maintenance. In all reality, I think you just outlined excellent principles for what might be considered “living.” The question is whether, under such a broad understanding, principles of information processing can be derived that may, like your ‘principles’ above, be necessarily general to all forms of persistence and, therefore, illuminating of other computational models.

CF: But persistence in this sense is persistence in the FEP sense, and as noted, the FEP applies to everything: electrons, atoms, rocks, cells, people, organized civilizations, etc. Take away the assumption that the MB is a spatial boundary, and it applies to generic quantum systems. It applies to Boltzmann brains and black holes. So this idea summarizes some intuitions about life, but doesn’t pick out life in any intuitive sense. Indeed, in my view, this is the great beauty of the FEP: it started as a characterization of brain-implemented cognitive systems, and ended up being a Bayesian theory of everything.

TC:  I am still confused by the apparent notion that, because the FEP applies to everything, the boundary between life and death, living and non-living dissolves. Does every “thing” just become another type of “thing” ad infinitum?

ML: For me, the operant question with all of these terms is: what do you expect them to do? Binary categories rarely work well, I think. Here’s a bit on “death”: https://thoughtforms.life/life-after-death-in-another-world-at-another-scale/. I think what we call “life” are things that are good at scaling up the proto-cognitive competencies of their parts. It’s a soft category (although Sara Walker would likely disagree). Do we really need a “boundary” between the living and non-living, or do we need tools for optimal interactions with various kinds of systems depending on their cognitive light cone? What would a clear boundary do for us, in terms of enabling further discovery?

CF: I like your pragmatic view of the “what is life?” question. I suspect that as we look harder, we will find yet more edge cases like viruses, viroids, or the more recently discovered “obelisks” in our guts. Craig Venter’s sequencing voyage found that the oceans are full of novel virus-like genome fragments. Who knows what kinds of “life cycles” these things have. It seems astonishing that a single lineage could propagate for ~4 billion years, 30% or so of the lifetime of the known universe, but maybe that’s not so rare, and there are all kinds of long-lived lineages. If there are, I bet there are lots of short-lived ones, too, that eventually decay back to free molecules that can then re-assemble in some new form later. The universe seems to be filled with organic stuff, which isn’t surprising given stellar evolution models. We should probably get used to the idea that an “organism” could be a highly spatially distributed thing with parts that only directly interact occasionally.

TC: For both pragmatic and epistemic reasons, I think I mostly agree with the view of a blurred line between living and non-living matter. However, I believe this sets up a spectrum. Prions or viruses at one end of this spectrum have low complexity, low persistence, low “scaling up the proto-cognitive competencies of their parts”. At the other end of the (known) spectrum, you have organisms like humans which have much greater complexity, persistence (so far), and scaling up of the proto-cognitive competencies of their parts. Indeed, in these low complexity systems, the line between living and non-living does become blurry and seems arbitrary, as a “living” virus looks and acts much the same as a “dead” virus. However, at the human level of complexity, something is clearly lost at death, as a living human is very different from a dead human.

In Chris’ words, “We are biological computers.” Thus, there is a sense that this boundary between life and death is blurry only in systems with low complexity. The distinction and, therefore, the boundary becomes clearer as the complexity of living systems increases. This allows us to ask epistemic and pragmatic questions about the difference between processes on either side of that boundary. By understanding the nature of those differences, we can try to preserve, extend, and improve life, as we do in medicine. Indeed, the whole field of biology studies these processes, as distinct from other physical processes. By removing that boundary, all of biology simply becomes physics. Finally, by understanding the computational nature of the processes that underlie living systems, we can ask questions about how alternative, artificial computational models can be instantiated that achieve a similar (or greater) level of complexity, persistence, and scaling. 

ML: I agree that there’s a spectrum, but I think it’s a spectrum of how we can best relate to the system, not a spectrum of objective qualities. Terms “living”, “machine”, etc. are only useful insofar as they pick out bundles of strategies one claims can be useful with that system. I detail it here. Does “alive” give us a useful set of tools?  It can, depending on what you want to do (for example, signal that you expect to be able to use the concepts of evolution on it). I suspect the spectrum of cognition more than the spectrum of “alive” gives us the tools we will need for the most interesting interactions, but I do agree that in some contexts/use cases, the living/nonliving spectrum could be helpful too.

TC: Indeed, I recently wrote to another researcher, “I think that any description of intelligence must arise evolutionarily from more general properties of information processing at more basal levels.” This is an area that fascinates me. Also, as an MD, thinking of all the ways that something is alive and, therefore, could become dead is highly useful in preventing morbidity and mortality. So, especially for complex living systems, it is a useful heuristic, at least. I am very keen to unpack this in terms of other, alternative computational models/lineages: can Earth-based notions of life be stripped away in favor of some deeper description of living systems that necessarily applies even to exobiological and artificial living systems? We agree that there is some form of cognitive/complexity spectrum. Can we go deeper? If complex, alien life were discovered that is completely biochemically foreign to life on Earth, how would we even begin to make sense of it? If an engineer claimed the discovery of artificial life, how would we make sense of that? Especially in the context of truly alternative lineages, I think the underlying style of information processing jumps out as a key point of distinction.

It is conceivable that what I mean by computation overlaps significantly with what you [ML] mean by cognition. I am envisioning an informational process driven by thermodynamics and other physical laws that serves to yield organizing, abstract representations of physical data about the environment for better understanding and manipulating the environment to maintain some organism-environment coupling. At a low level, these processes must deal with information gain/loss but need not do so in a way that is influenced by Earth-based descriptions. In some sense, what I am searching for, by stripping away these Earth-based descriptions, is a universal description of what living systems are doing, regardless of their substrate. I think the TAME paper, by broadening the concept of cognition, is a step in this direction, but I am curious whether we can take some basal definition of cognition and dissect its physics and information processing to gain a deeper understanding of how these processes are defined in terms of information, thermodynamics, and physics, especially in some deeper/universal sense.

CF: Given that the FEP is precisely “an informational process driven by thermodynamics and other physical laws that serves to yield organizing, abstract representations of physical data about the environment for better understanding and manipulating the environment to maintain some organism-environment coupling”, and is defined only using fundamental physics, not any “Earth-based descriptions,” and is completely substrate-independent, what do you find unsatisfying about it?

TC: As you noted earlier, “I’m not convinced that there’s any principled theory for what we’d intuitively call ‘living.’” The FEP does not distinguish between living and non-living things, nor does it try to. If everything obeys the FEP, then it becomes a fundamental law like any other. Mike mentioned a form of proto-cognitive scaling, you a persistence. Perhaps we can consider what happens when “living” things die. Indeed, macroscopic thermodynamic irreversibility is the result of microscopic logical irreversibility via Landauer’s Principle. Thus, death must microscopically be a logical process which results in the loss of information. Conversely, life must be the logical process which produces the information lost at death.
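For concreteness, Landauer’s Principle as invoked here puts a floor of k_B·T·ln 2 on the heat dissipated per bit erased. A quick worked number (body temperature is an assumption chosen for illustration):

    import math

    k_B = 1.380649e-23             # Boltzmann constant, J/K
    T = 310.0                      # roughly human body temperature, K
    print(k_B * T * math.log(2))   # ~2.97e-21 J minimum per bit erased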

Are we really going to give up on a fundamental description of living systems, as distinct from non-living systems? Again, at low levels of complexity (a living virus vs. a dead virus), I don’t see much of a distinction. However, at increasing levels of complexity (Schrödinger’s famous cat), the distinction becomes much clearer. Understanding the fundamental differences between living and non-living systems could have significant implications for various fields, including medicine, ecology, and artificial intelligence. It could inform strategies for preserving life, understanding ecological balance, and designing artificial systems that mimic or surpass biological systems in terms of efficiency and adaptability. I find this sort of deep, theoretical inquiry to be highly relevant and impactful. Is it all just a waste of time?

ML: I don’t think it’s a waste of time at all;  it’s hugely important and impactful. I just think the principles we seek are deeper than what we normally think of as living.  I think neuroscience isn’t about neurons, fundamentally, and I think “biology” isn’t about living things, fundamentally. I know it sounds crazy, but I think understood correctly, these sciences point to deep principles that take us beyond the material they were first invented to deal with. What happens when “living” things die – it’s an excellent question. So, when we make Xenobots, the embryo from which we liberate the cells dies. Or does it? Sort of; the embryo is gone, but its cells are still alive, and in fact become a different (proto)organism.  So, do we require the individual cells to die? Cells can be fragmented into tiny blobs and they live and move around for hours. At that point they are well-described by models of cytoskeletal growth and cycling. What do we really gain in any of these systems by asking if they are alive? Maybe it refers to whether they participate in evolution, which is useful. I think we gain a lot by asking what perspective on the world any of these musters, but that seems equally applicable to a lot of “non-living” systems?  And of course there are all kinds of hybrids, hybrots, and chimeras described in the attached, which will be very hard to classify.

TC:  One of the things I find fascinating about living systems is their ability to encode a unique history. Earlier, when I described macroscopic irreversibility in the context of death, I did not mention that this irreversibility applies to living systems, as well. In other words, living systems reduce their internal entropy not only by achieving information gain about environmental data they find relevant, but also by discarding, through information loss, the data they find irrelevant. This discarding of information helps to offset the organism’s local entropy reduction and leads to macroscopic irreversibility. Thus, the organism’s history becomes macroscopically irreversible and, due to the size of the state space traversed, (probably) unique.
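A toy Shannon-entropy reading of this claim (purely illustrative): coarse-graining, i.e. discarding detail judged irrelevant, lowers the entropy of the representation the system retains.

    import math
    from collections import Counter

    def entropy_bits(samples):
        counts = Counter(samples)
        n = len(samples)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    raw = [0, 1, 2, 3, 4, 5, 6, 7] * 4   # 3 bits of detail per observation
    coarse = [x // 4 for x in raw]       # keep only "low vs. high"

    print(entropy_bits(raw), entropy_bits(coarse))  # 3.0 bits vs. 1.0 bit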

Indeed, the construction of internal models, aligning with the FEP, requires some form of memory about the environment (genetic, memetic, or otherwise) and this aligns with the internalization of some history of environmental states. It is through this history that living systems can model and respond to their environment in ways that are organizing/entropy-reducing compared to non-living systems. Thus, when we are talking about death, what we are talking about is the loss of some unique model/history/record of the environment w.r.t that organism.

I believe that when totipotent/pluripotent stem cells are liberated from the embryo, some patterns are lost (the unique history of the development of that embryo) and some patterns are retained (the histories/models internal to the cells that remain). Furthermore, cell fragmentation destroys the unique history of that cell, while preserving some form of memory/history in the fragments that remain. Living systems are hierarchical pattern generators. Thus, information can be stored and lost in higher-order structures, while preserving the information stored in lower-order structures. While these higher/lower-order structures can complicate classification, I do not believe they confuse the conversation surrounding death, which is some irretrievable pattern/information loss in a given context or scale. This reversibility/irreversibility is on full display in biochemistry where, in the canonical example of glycolysis, steps 1, 3, and 10 are considered irreversible and are thus where regulation of the pathway occurs. All other steps in the pathway have small Gibbs free energy changes and are considered reversible.
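For reference, approximate textbook values for the actual free-energy changes at those three steps (figures of the kind reported for erythrocytes in standard biochemistry texts; exact numbers vary by source and tissue):

    # Approximate actual delta-G values in glycolysis (kJ/mol), illustrative only.
    irreversible_steps = {
        "step 1 (hexokinase)":            -33.4,
        "step 3 (phosphofructokinase-1)": -22.2,
        "step 10 (pyruvate kinase)":      -16.7,
    }
    # Large negative values make these steps effectively one-way in vivo,
    # which is why regulation of the pathway concentrates at them.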

Since irreversibility is related to information loss, I believe these irreversible biochemical steps are key in defining the unique history of the organism. Fundamentally, this is information management. All living systems are entropy reducing; they are handling information in ways that our technology cannot yet match. I am wondering if, at its core, life is really about information and its processing. Indeed, as goals imply a context or model, I am wondering if teleonomy can be considered purely in informational terms.

In the context of information processing, I am inclined to agree with the life/non-life spectrum as a form of computation that sees informational dynamics as macrostates supervening on physical dynamics as microstates, with varying degrees of coupling strength between them. Strong circular causality between layered macrostates and microstates allows living systems to most effectively reduce entropy through informational regulation of physical dynamics. Weakly coupled systems have disorganized elements and high entropy, while strongly coupled systems can tightly regulate microstates to maintain low entropy. Living systems tend towards higher coupling strength, enabling greater entropy management through hierarchically constraining physical elements via informational patterns. Thus, a spectrum emerges, positioning living systems not as binary entities but as points along a continuum defined by their ability to regulate and reduce entropy through the integration of informational and physical dynamics.
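A toy numerical rendering of this coupling-strength picture (entirely illustrative dynamics): a macro-level setpoint regulates a noisy micro-level variable with coupling strength g, and stronger coupling confines the system to a smaller region of state space, i.e. lower entropy, proxied here by the stationary spread.

    import random
    import statistics

    def stationary_spread(g, steps=5000):
        # Micro-dynamics x' = (1 - g) * x + noise; g is the macro-to-micro coupling.
        x = 0.0
        samples = []
        for _ in range(steps):
            x = (1.0 - g) * x + random.gauss(0.0, 1.0)
            samples.append(x)
        return statistics.pstdev(samples)

    print(stationary_spread(0.05))  # weak coupling: wide wandering, high entropy
    print(stationary_spread(0.9))   # strong coupling: tight regulation, low entropy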


Featured image by DALL·E 2.

11 responses to “On biology and computation: a dialogue between Chris Fields and Tyler Clark”

  1. J Fern

    “Though we often use continuous mathematics (e.g. dynamical systems theory) to characterize living systems, their dynamics are discrete at every level we can probe experimentally; indeed, all physical systems are discrete in this sense. If QT is correct, spacetime itself is discrete. […] The theory of computability in fact describes (as far as I can tell) the behavior of generic quantum systems, of which (assuming QT is true) living systems are examples.”

    I struggle to see anything but the dogma of computationalism lurking in these sentiments. Why not leave it an open question? There hardly ever seems to be room to explore the possibility that biological systems embody an underlying architecture beyond classical computation. Not to mention that quantum theory lies on foundations of continuity and infinity — these are not merely conventions, but critical features of the theory.

    1. Zach C

      I tend to agree, but I also believe that at some level, without a concrete example, it just becomes philosophy of semantics.

      “Computation” as a word barely means much when you realize that in its most raw form, it’s just relationships, in all sorts of paradigms.

      Is it even possible for the idea that things are relational to be controversial?

  2. Rob Scott

    Chris’ statement, “Consider the FEP [Free Energy Principle]. It says that persistent systems will behave in a way that maintains the integrity of their boundaries,” is my new favorite way to describe the FEP.

    1. Benjamin Schulz

      Another way Tyler could differentiate life from the base FEP hypothesis is by investigating all of the triggered coding of cellular death pathways. Apoptosis, ferroptosis, autophagic cell death, pyroptosis, and necroptosis are all programmed kill commands that the FEP doesn’t have much of an answer for.

  3. Rob Scott

    I really wish this conversation continued…

    Michael, the final note from Tyler reminds me of the “Where is the memory stored?” question I imagine you get constantly.

    It’s all life to me, but… The distinction between “anything” persisting and something that’s storing information in a “model” that has sensing/acting capabilities with default survive/persist goals (as well as more subtle and complex goals) is very interesting.

    Thanks for posting this. 🙏

    1. Mike Levin

      it may well continue. I’ll post more if it does. Also I will be posting something on this Where is the Memory issue.

      1. Rob Scott

        That’s fantastic. Super grateful. 🙏

  4. Benjamin Schulz

    I think Tyler should investigate different types of programming. What makes life different from other computational objects, aside from the multi-hierarchical polycomputational aspect, is the dynamic coding going on in the read-write stage. We have genes that can “turn up” or “turn down” the dial or be shut off completely. Self-adapting coding that seems to have a robust methodology for overcoming an absurdly noisy environment is one way to look at how life is computational. Maybe we are more than dynamic programming that executes at runtime; maybe we also execute at compile time. I don’t think we are LISP-like in pointing back to lists.

  5. Tony Budding

    Great stuff here. Thanks for posting it! Y’all covered quite a range of topics. A few thoughts:

    1. The human desire to extend life is based on attachment to our constructed senses of self (our own and others’). This attachment is natural and difficult to remove, but without it, the craving for continued perpetuation dissolves. Furthermore, life requires death, so arguably the worst thing we could do for the general health of the planet is drastically extend life (particularly human life).

    2. Complex human coordinated activities are map/model based, not computational. For example, if we’re hiking in the woods and need to cross a stream by hopping on rocks, our brains are not calculating distance and the exertional force required to move our mass that distance. Instead, we’re comparing the visual data of the stream and rocks with maps and models of our historical steps, hops and jumps. As we cross, we constantly adjust our use of the maps and models based on new sensory information (such as one rock being particularly slippery and shifting our center of mass).

    3. I agree with Mike here that clearly defining the boundaries of life is a less interesting question than exploring the myriad forms of intelligence (perception and response). TC’s description of macroscopic life and death is based on the activity and loss of unified coordination of the body as a whole. We naturally assume this unified coordination is a real thing (life), but it’s actually the highest stage of modularity of stacked and layered basal functionality. Again, we have attachment (which I concede is a perilous scientific principle) as the unifying principle stacking and layering basal functionality into higher stage modules. This attachment is essential to life as we know it, but without it, the urgency of unified self-perpetuation dissipates.

    4. Perception requires awareness, which creates an enormous conundrum for biology (as in, where does it come from because it doesn’t seem to be biological in origin). Response requires determined effort, which is necessary to decrease entropy. Determination requires both awareness of perceived data and attachment to an outcome. CF’s definition of FEP is precisely this play of awareness, determined effort and attachment.

  6. Benjamin L

    > I agree that there’s a spectrum, but I think it’s a spectrum of how we can best relate to the system, not a spectrum of objective qualities. Terms “living”, “machine”, etc. are only useful insofar as they pick out bundles of strategies one claims can be useful with that system.

    This is similar to what Lisa Feldman Barrett argues for emotion. Instead of treating emotions as natural kinds defined by objective qualities, she argues that emotions are categorizations constructed by the brain according to how well they fit the situation and prepare useful action. See her papers “Are Emotions Natural Kinds?” and “The theory of constructed emotion: an active inference account of interoception and categorization”.

    https://journals.sagepub.com/doi/10.1111/j.1745-6916.2006.00003.x
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5390700/

    Conversations like these are interesting and helpful. They provide context for the science, and context is that which is scarce. https://marginalrevolution.com/marginalrevolution/2022/02/context-is-that-which-is-scarce-2.html

  7. Lio Hong

    CF: “If there are, I bet there are lots of short-lived ones, too, that eventually decay back to free molecules that can then re-assemble in some new form later… We should probably get used to the idea that an ‘organism’ could be a highly spatially distributed thing with parts that only directly interact occasionally.”

    These statements sound conceptually similar to Nick Lane’s mention of recurrent cycles to create persistent molecules in his origin-of-life research.

    They also happen to remind me of reincarnation, which proposes some sort of continuity between human lives based on moral conduct. I know this kind of abstract, non-technical topic isn’t really the focus of this blog, but one of the references in ‘Computational Boundary of the Self’ mentioned it briefly, in addition to ‘Biology, Buddhism, and AI: Care as the Driver of Intelligence’.

    It would be interesting to explore its basic implications in various aspects of life, but English-language literature focuses mostly on descriptions, evidence, morality and syncretism. Possibly there are papers written in South Asian or East Asian languages, which are less accessible.
