Below, you will find some recent recorded talks, subsequent audio Q&A, relevant links mentioned, and discussion text on topics of computation, basal cognition, and reality, from: Joe Dumit, Bernardo Kastrup, Richard Watson, Alexander Ororbia, and Don Hoffman.
Joseph Dumit’s “Neuroexistentialism, Improvisational Research, and Slow Partying”:
Q&A following the talk:
Links:
Chapter 4: Lagging Realities: Temporal Exploits and Mutant Speculations
Chapter 8: Plastic Diagrams: circuits in the brain and how they got there
Bernardo Kastrup and Richard Watson discussion:
Here is the list of topics Richard wrote for this talk; some will be covered in the forthcoming Part II.
Here is a discussion between Alexander Ororbia and me, in the context of this paper:
Mike: I heard an interesting question by Stephen Wolfram today and I was wondering what you thought. He was pointing out that since nematodes (like C. elegans) have invariant nervous systems with the exact same number and type of cells and connections between them, it could theoretically be possible to move/copy mental states from one C. elegans to another. Personally I don’t think the secret sauce is just in the neural network anyway, but putting that aside, what do you think of situations where the sender and receiver hardware are identical – do they not count then as mortal computers?
Alex:
[To start, I actually agree with you, Michael: I don’t think the “secret sauce” is just in the neuronal networks either. There’s even the notion of cellular memories, which I think already points to there being more to cognition than the more recently evolved brain; of course, this is what basal or elementary cognition, at the base of mortal computation, brings forth.]
The case you raise, where the substrate is absolutely identical (a perfect clone/copy of its source substrate), is very interesting to think about. After rewriting my email a bunch of times, I think my answer comes down to: yes and no, depending on the nature of the substrate cloning.
Why the yes part of the answer? Well, I was remembering, from studying long ago when I was a young engineer diving into philosophy of mind, some aspects of theories related to “streams of consciousness”. There was a particular thought experiment I recall from that time that I think is relevant to the above situation/case raised (biased a bit by the imperfectness of my memory and biased by some of my and Karl’s favorite words):
Slightly Modified Thought Experiment: Suppose that when Captain Kirk is transported/beamed to another planet from the starship Enterprise, he is actually destroyed/disintegrated completely while on board the ship the moment he enters the transporter, and, at the destination planet, he is exactly reconstructed physically, including a perfect copy of his mental states at the instant right before destruction. Would he be the same Captain Kirk who was on board the Enterprise before he was disintegrated/killed?
Some theories of identity say he wouldn’t be the same Captain Kirk: the original one is gone for certain, his existence has terminated (thus, the Kirk on the Enterprise was not able to remain in his non-equilibrium steady-state solution; NESS), and this perfectly reconstructed Kirk is a fork, or split, off the original Kirk consciousness stream. This forked Kirk would acquire his own experiences and follow his own trajectory, and might even think he is just Kirk, but he is not the original Kirk; he is simply a different one that happened to have a unique, artificial entry point into existence.
I think both Kirks could be argued to be mortal computers, since both are certainly equipped to work to remain near their NESS. But they are different mortal computers, each with their own traits (particularly as the newly spawned/forked Kirk moves forward in time), even though they share a unique point in time where one was obliterated and the other was immediately born. If we follow the “divergence of identity” line of thinking, from my perspective we’ve just effectively created a new mortal computer that happens, at its starting point of instantaneous existence, to share similarities with the original one that was destroyed, for a brief stretch of time. But we didn’t really transfer the software from one to the other; we’ve just made one mortal computer from cloned elements of another. In some sense, this is a very strange/odd form of reproduction (since, in Karl’s and my paper, replication with mutation/noise is “reproduction”).
This old thought experiment made me think you could maybe “fork/spawn” (offspring) mortal computers, but you cannot technically “transfer” the software between them; the original program/software died with the substrate that was destroyed. Funnily enough, the divergence-of-identity thought experiment does, if I recall, also treat the case where the original Kirk is NOT destroyed and a new exact copy of the original is synthesized on the target planet: would these Kirks be the same? The answer is also technically no, because now you could argue that these Kirks cannot occupy the exact same point in space (and thus the same niche). The new Kirk would simply be a different Kirk mortal computer that just happened, at one time t, to share the exact same substrate properties as the original Kirk. The new Kirk, based on the work Karl and I surveyed in our paper, would effectively be a replication subjected to some noise (since we are not superimposing both Kirks into the same exact spot in space and time); thus the forked Kirk mortal computer becomes subject to natural selection and would then pursue its own quest for stability as an artificial “offspring” of the source mortal computer. (You could say that the artificial offspring Kirk has had aspects of its identity propagated over time in a sense, but it is not the same identity as the source Kirk.) This is compounded by the fact that Karl and I argued, or rather echoed the aggregate of about a century of voices/thinkers, that the mortal computer is built on its embodiment, its enactivism (it is vivaciously coupled to its niche [Footnote 1]), its embeddedness (in a collective/larger system of other mortal computers), its extension (its connection to, and use of, non-mortal objects to offload cognitive functionality), and, technically, elementary cognition (which is basal cognition, which I assume is being folded into the copy of the hardware or substrate).
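The fork-then-diverge picture can be sketched as a toy simulation. Everything here is my own illustration, not a construct from the paper: an agent's state is just a vector, "reproduction" is cloning with a small perturbation, and each copy then follows its own noisy trajectory.

```python
import random

# Toy sketch of "replication with noise" as reproduction: fork a state with
# a small perturbation, then let each copy follow its own noisy dynamics.
# All names and dynamics here are illustrative assumptions.
def fork(state, noise, rng):
    """Clone a state vector, adding small Gaussian noise (the 'mutation')."""
    return [x + rng.gauss(0, noise) for x in state]

def step(state, rng):
    """One step of an agent's own (noisy) dynamics."""
    return [0.9 * x + rng.gauss(0, 0.05) for x in state]

rng_a, rng_b = random.Random(1), random.Random(2)
original = [1.0, 2.0, 3.0]
offspring = fork(original, noise=0.01, rng=rng_b)

# Nearly identical at the instant of forking...
gap_at_fork = max(abs(a - b) for a, b in zip(original, offspring))

# ...but each pursues its own trajectory afterwards.
a, b = original, offspring
for _ in range(50):
    a, b = step(a, rng_a), step(b, rng_b)
gap_later = max(abs(x - y) for x, y in zip(a, b))
print(gap_at_fork, gap_later)  # typically the later gap is much larger
```

The two agents share a brief moment of near-identity at the fork, then drift apart under independent noise, which is the "subject to its own natural selection" point in miniature.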
So, why the no part of the answer? Coming from the above line of thought, we see that the “forked” mortal computer is indeed a mortal computer, as its substrate/body would be suitably equipped (we could even loosen the requirement of an absolutely perfect copy, so long as the new hardware/body lets the new mortal computer engage in a quest to return near its NESS). However, as I qualified earlier, being a mortal computer in this case would depend on the nature of the cloning/creation of the identical substrate. The problem is, I have repurposed the identity-divergence theory assuming we start with an existent mortal computer that would already satisfy the basic components of the informal definition Karl and I gave. There is a “transference” here, but it is not the way we do program transfer in today’s immortal computation; it is a form of reproduction (replication + noise). The other way mortal computers transfer knowledge is through communication (which is partly what the embedded aspect of mortal computation brings to the table), though this goes through a noisy channel, and the transfer of information from one mortal computer to another will of course be imperfect.
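The noisy-channel point can be made concrete with a binary symmetric channel, a standard toy model (not a construct from the paper): each bit sent between agents is flipped independently with probability p, so the received "knowledge" is an imperfect copy.

```python
import random

# Binary symmetric channel: a standard toy model of noisy communication
# between two agents (illustrative; not from the Friston & Ororbia paper).
def transmit(bits, p, seed=0):
    """Send bits through a channel that flips each one with probability p."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < p) for b in bits]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
print(transmit(msg, p=0.0))  # noiseless limit: [1, 0, 1, 1, 0, 0, 1, 0]
print(transmit(msg, p=1.0))  # every bit flipped: [0, 1, 0, 0, 1, 1, 0, 1]
received = transmit(msg, p=0.2)  # realistic case: some bits flip at random
```

Only in the noiseless limit p = 0 does the receiver get an exact copy; any real channel between embodied agents sits somewhere in between.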
When we consider machine intelligence, where we can literally copy software programs across GPU/CPU/TPU servers, such as the deep neural architecture underlying ChatGPT that has been pre-trained on lots of data samples collected from the internet, this is possible because the computer program was designed independently of any body [Footnote 2]. It was designed in terms of immortal computation (the body, niche, etc. do not matter); it specifically never underwent, or needed to undergo, its own quest to remain near a NESS, in which the compute is brought as close as possible to the in-memory processing inherent to the substrate (where the objective approaches one of thermodynamic considerations, and the software becomes entangled with the hardware). The software that characterizes the amorphic programmatic structure of ChatGPT is a form of immortal computation that would operate fine, and essentially identically, anywhere, because it was not shaped by its substrate, the elementary considerations of the substrate (and its evolutionary trajectory), the substrate’s interaction with a particular niche, and so on. ChatGPT and modern-day machine intelligence programs are immortal computers; things like Xenobots/Anthrobots or organoids are mortal computers.
A great question (and experiment) would be: are we able to actually copy the “computation/software” currently contained in an extant mortal computer such as a Xenobot and transfer it to another Xenobot mortal computer that had exactly, absolutely the same body as the source Xenobot (though I realize this may not be possible in biological practice), effectively overriding whatever software was in the new target Xenobot clone body with the knowledge/software of the source? I’m not entirely sure what this would look like, however… [Footnote 3]
[Footnote 1]: A mortal computer’s cognition/functionality is the result of exercising the sensorimotor systems of its body in exchange with its niche. In effect, the mortal computer is an active participant in the creation/production of the information that it processes; thus the mortal computer will very much be shaped by the consequences of how it acts and how it has acted.
[Footnote 2]: Note that another dimension of mortal computation, as Karl and I wrote, is that the computational processes are deeply co-designed with the substrate (this co-design is also part of the definition) and then shaped by interaction with a niche, a collective, and non-mortal objects. (The software is also considered to be part of the internal states behind the Markov blanket.)
[Footnote 3]: The reason I went with Xenobots, besides their being a wonderful example of mortal computation, is that I was originally thinking of the “head transplants” I once read have historically been considered (I cannot speak to them beyond awareness of the concept, how ethical they could even be, and possibly some unsuccessful attempts in history to carry them out), and the question of whether the re-attached brain would elicit the same personality as the source human individual. I also thought about the fact that our gut microbiome affects our neuronal/cognitive function and emotional state of being, which is an example of the software being quite dependent on, and entangled with, the hardware/substrate, and about some reports of organ recipients “remembering” events from their organ donor’s life, due to the possibility that cellular memories are stored outside the brain and thus may transfer from donor to recipient (I’m not aware of how deep any study on this has been, however). I do think these provide some interesting aspects of mortal computation as opposed to immortal computation (where the same program would run identically across computing platforms).
Another small thought that I forgot to include: I think, based on our Zoom conversation a few weeks ago, that there could also be a spectrum of mortal computation. Karl and I had already started some basic classification of levels/kinds of mortal computation, but we didn’t write about, for example, a computing system that is only embodied but not enactive, or one that is embodied and enactive but not embedded or extended. I’m also not entirely sure it even makes sense to decouple these things, as a body exists in a niche, and a niche usually contains other entities and non-mortal objects… (so this last thought might be diverging into the nonsensical).
A discussion between Donald Hoffman and MM:
MM: I’ve thought along your lines since I was a teenager, that obviously, we haven’t evolved to see the truth. I agree with that. The only problem I have with the theory, and I read the paper you sent in the email, so maybe I’m still missing something, is when you’re talking about scattering amplitudes and the scattering formula, the problem I have is the Pauli exclusion principle.
Don: Oh, so by the Pauli exclusion principle, you’re talking about spin-half particles can’t be in the same state, right?
MM: Yes, with like half-integer spins, yeah.
Don: Perfect, right. So how do you see that that’s connected with the flower example?
MM: Because what you’re saying is there are no physical laws independent out there, right? But there have to be, or else there’s going to be atomic overlap. If a flower doesn’t follow a set of objective parameters outside of my experience of the flower, then when the bee interacts with it and I interact with it at the same time, there is going to be an overlap, right?
Don: Here’s one way to think about it. You can imagine a virtual reality game in which you have a multiplayer game, but what you’ve done is you’ve made completely different worlds for the players. They’re completely unrelated, and yet the players can play a common game. One player might think that he’s hitting a hockey puck, another one might think that he’s throwing a muffin or something like that. They can interact, and if they actually changed headsets, they’d be surprised. They thought the other person was seeing what they were seeing, but in fact, they weren’t seeing it at all, and yet you could have them coordinate.
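Don's multiplayer-headset analogy can be sketched minimally in code. The puck and muffin come from his example; the shared hidden state, the mirrored coordinates, and all function names are my own illustrative assumptions.

```python
# Minimal sketch of the multiplayer-headset analogy: one shared hidden
# state, two completely different "renderings", yet coordinated play.
hidden = {"pos": 0}

def headset_a(state):
    # Player A's interface renders the shared state as a hockey puck.
    return f"hockey puck at x={state['pos']}"

def headset_b(state):
    # Player B's interface renders the very same state as a muffin, in a
    # mirrored coordinate system: the two worlds look nothing alike.
    return f"muffin at x={-state['pos']}"

def act(state, delta):
    # Either player's action updates the one shared hidden state, so the
    # two unrelated views never fall out of sync.
    state["pos"] += delta

act(hidden, 3)
print(headset_a(hidden))  # hockey puck at x=3
print(headset_b(hidden))  # muffin at x=-3
```

Swapping headsets here would indeed surprise the players: each rendering is consistent with the shared dynamics, yet neither resembles the other.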
MM: Okay, but wouldn’t you see inconsistencies happen?
Don: So, for example, people who are red-green colorblind actually see the world differently from the rest of us. Their colors are different, and there are women who are called tetrachromats. They have a fourth color receptor, and so they see the world differently from us. Yet, even though there are these significant differences, we can all interact. So, space and time and even things like the Pauli exclusion principle are principles that are specific to our particular little headsets. There’s going to be a deeper theory of consciousness that gives rise to space-time and the Pauli exclusion principle, for example, as a projection of a much deeper theory that doesn’t care at all about particles and Pauli exclusion. The particles, even their definitions, including the Pauli exclusion principle, all of that is stuck in the headset and is not a part of the deeper reality.
MM: Okay, so let’s say it is some kind of video game. Are you saying basically that you have a global space versus a local space when you’re coding it?
Don: Right, so maybe one clue here is that the stuff we see in our headset, the matter and energy that we see, is apparently a very small minority of what’s really out there. Dark energy and dark matter completely dominate our universe. The real stuff that’s going on, we don’t even see it in our headset, and we know it. We’re scratching our heads. That’s what headsets do. Our headset is throwing out most of reality. So yeah, we have some kind of consistency between us and other humans on the little part that we share. It’s possible to create a series of games in which people are playing, and each thinks they’re playing a certain kind of game, and the others think they’re playing an entirely different game, and they can be coordinated.
MM: I’ve got a kind of dumb example that may be relevant. Apparently, bees feel the electrostatic field of flowers. They know where to land. If you ask a bee where the flower stops, I think it will say a centimeter above where we think the flower ends because, as far as the bee is concerned, that electric field is part of the flower. It’s plausible for different views on the world to be different but not screw up the interaction to the point where we can’t cooperate.
Don: Well, there are certain things that do go through walls. We call them radiation. We don’t know what they are, but there are things that go through walls. We just don’t happen to be able to go through walls.
MM: Right, but I do agree with Mike’s emphasis on understanding to what extent there is concordance. We do live in different worlds, all of us, but in some ways, there is some kind of synchronization or something. It’s not trivial to make two different video games that actually match. It’s pretty tricky to make it consistent such that both people think they’re doing different things, yet it’s consistent.
Don: There was an experiment done at UC Irvine by Louis Narens and collaborators where they got color-normal people and colorblind males in an experiment where they had to agree on boundaries for color and give them arbitrary names. They had to converge on a set of shared boundaries. What was remarkable was that the colorblind people had an outsized influence on the final result. It was their limits that set the parameters for everybody else.
MM: We look through nature and see insect and animal behavior that we think is absolutely stupid. For example, insects flying into oil slicks because they look at the polarization of water. Oil has much higher polarization than water, so it’s a supernormal stimulus, and they go to their death. Everything dies. Every interface finally fails you.
MM: Talking about interfaces, where most people hop off the bus is the cause and effect that clearly happens in the brain. When you’re thinking about brain injuries, strokes, or dementia, the character that was there before the pathology onset is obliterated, and a new character emerges. In your opinion, where is the actual headset? Where does the interface start and end?
Don: You have this vast network of conscious agents, and it’s infinite. But you have a sub-collection of conscious agents that can compute an interface effectively and project some of the dynamics of the whole into this little interface. When I take that network and turn it to look at the network itself, I predict you’ll see neurons and brains. Neurons and brains are what you see in your interface when you turn the network that’s creating the interface to look at itself through its own interface. The brain is the interface description you get of the actual conscious mechanism that creates the interface. The neural correlates of consciousness are not because the brain causes consciousness; it’s because consciousness causes the brain.
MM: But there’s so much cause and effect in biology, like in medicine. There’s a disease process brewing inside a body, and you have symptoms before you open the agent up and see the tumor. There are so many intricacies. Why wouldn’t we have evolved to just have a simpler rule set?
Don: Something exists, and that something is this infinite network of interacting conscious agents. When we open up a skull and investigate the neurons and glial cells, we’re using our four-dimensional headset to probe further into that network of conscious agents. There are systematic relationships between our headset and that network, so we can predict what we’re going to see. Cause and effect might be an artifact of our headset. It’s possible for us to write down a Markovian dynamics of these conscious agents in which the entropy does not increase. When you take a projection of it that loses information, the projected version of the dynamics will have an increasing entropy, and you will have an arrow of time. Cause and effect is our dumbed-down interface representation of a more holistic reality that doesn’t care about cause and effect.
MM: So, I was speaking with a physicist on Twitter, and we talked about the holographic principle and black holes. Essentially, you could say there’s nothing inside your body until it’s technically opened because everything is surface area only.
Don: Right. The holographic principle suggests that space-time is just an interface. Suppose you take a sphere with a certain volume and take six smaller spheres that just pack inside that bigger sphere. The six smaller balls can hold more information than the bigger ball because they have a greater surface area. The holographic principle is onto something because it’s telling us that space-time is just an interface.
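Don's six-spheres example can be checked with a little arithmetic. For six equal spheres packed inside a unit sphere, the known optimal radius ratio is r/R = 1/(1 + √2), with centers on an octahedron; taking that standard packing result as given, the combined surface area does come out slightly larger than the enclosing sphere's.

```python
import math

# Six equal spheres packed inside a unit sphere: the optimal radius ratio
# is r/R = 1/(1 + sqrt(2)), about 0.414 (octahedral arrangement; this
# ratio is a standard sphere-packing result, the rest is arithmetic).
R = 1.0
r = R / (1 + math.sqrt(2))

big_area = 4 * math.pi * R**2
six_small_area = 6 * 4 * math.pi * r**2

print(r)                          # ≈ 0.4142
print(six_small_area / big_area)  # ≈ 1.029: the six balls beat the big one
```

So the six inner balls carry about 3% more surface area, and under an area-based information bound, more information capacity, than the sphere containing them.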
MM: Talking about the wave function, what can cause a collapse? What is signaling the collapse?
Don: The Copenhagen interpretation is different from the QBist interpretation. Bohr and Heisenberg thought measurement was important but not the conscious observer; they wanted to step away from the consciousness of the observer. QBists say it’s the mind of the conscious observer that is collapsing the wave function. If I roll a die and update my probabilities, the collapse of the wave function is just the update of probabilities by a conscious agent. The QBists need a theory of agents to explain this.
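The QBist reading Don describes, collapse as ordinary Bayesian updating, can be sketched with his die-roll example. The numbers and function names below are my own illustration.

```python
from fractions import Fraction

# Sketch of "collapse" as Bayesian updating: an agent's probability
# assignment over die faces, revised on learning the roll came up even.
prior = {face: Fraction(1, 6) for face in range(1, 7)}

def bayes_update(prior, likelihood):
    """Posterior is prior times likelihood, renormalized (Bayes' rule)."""
    unnorm = {h: p * likelihood(h) for h, p in prior.items()}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# The agent learns only that the outcome is even: its distribution
# "collapses" onto the even faces.
posterior = bayes_update(prior, lambda face: 1 if face % 2 == 0 else 0)
print(posterior[2])  # 1/3
print(posterior[1])  # 0
```

Nothing physical need change at the die; on this reading, what "collapses" is the agent's probability assignment.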
MM: So, how do you define observation in this context?
Don: We define an order on Markov chains. One Markov chain is less than another if it’s a trace of the other. This defines a non-Boolean logic of observation. The observation itself is an integral part of the big system. We can start to talk about why things look like a collapse of the wave function. When you take a projection of a Markov chain, the stationary measure on that subset of states is the same as the stationary measure from the original chain, just renormalized. This is what we call the trace chain process.
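The trace-chain property Don states, that the stationary measure on the watched subset is the original stationary measure restricted and renormalized, can be verified numerically with the standard censored-chain formula P_A = P_AA + P_AB (I − P_BB)⁻¹ P_BA. The 4-state transition matrix below is my own illustrative example.

```python
import numpy as np

# An arbitrary irreducible 4-state Markov chain (illustrative numbers).
P = np.array([
    [0.10, 0.40, 0.30, 0.20],
    [0.30, 0.20, 0.30, 0.20],
    [0.25, 0.25, 0.25, 0.25],
    [0.40, 0.10, 0.10, 0.40],
])

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalized to a distribution."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def trace_chain(P, A):
    """Censored (trace) chain: the original chain watched only on subset A."""
    B = [i for i in range(P.shape[0]) if i not in A]
    PAA, PAB = P[np.ix_(A, A)], P[np.ix_(A, B)]
    PBB, PBA = P[np.ix_(B, B)], P[np.ix_(B, A)]
    # Sum over all excursions through the unobserved states B.
    return PAA + PAB @ np.linalg.inv(np.eye(len(B)) - PBB) @ PBA

pi = stationary(P)
A = [0, 2]
pi_trace = stationary(trace_chain(P, A))
print(pi_trace)              # stationary measure of the watched chain
print(pi[A] / pi[A].sum())   # original measure on A, renormalized: same
```

The two printed vectors agree, which is exactly the renormalization property described in the conversation.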
MM: That’s interesting.
Don: The physicists are finding these positive geometries outside of space-time. What caused these positive geometries? Some dynamical system. We’re going to show that the positive geometries are a projection of a much richer world of conscious agents.
MM: Talking about the chain, have you read Carlo Rovelli’s work?
Don: Yes, Rovelli is talking about agents inside space-time, not agents outside of space-time. Panpsychism views certain fundamental physical objects as having consciousness. I’m good friends with Philip Goff, who is a leader in the push for panpsychism. I’m trying to persuade him that panpsychism gives too much credence to space-time. Consciousness should not be restricted to the particular trivial laws of physics of our little headset.
MM: I agree. Panpsychism is halfway. You need to go to what philosophers call idealism or conscious realism.
Don: Right, and the physicists are finding static structures outside of space-time, the positive geometries, but they’re not discussing dynamics. At some point, physics will have to deal with the uncomfortable question of what kind of entities outside of space-time are involved in the dynamics. We’re talking about dynamical entities not inside space-time, which are what we call conscious agents.
MM: I have a quick question about bioelectric fields and how they might affect the gut microbiome.
Mike: We’ve done some work on endogenous microbiota and how they hack the bioelectric interface to modulate their host. For example, in planaria, bacteria can manipulate the host by messing with proton flux, causing the worms to end up with two heads. Bacteria certainly feel and respond to the bioelectric properties of their environment. We’ve also found that the suppression of tumorigenesis can be controlled by bioelectric signals, which have a bacterial intermediate.
MM: Could you send me the references? Ever since COVID, my gut microbiome has been messed up, and I’m working with doctors to help me with it.
Mike: Sure, I’ll send them. The work on the human microbiome is still in its infancy.
Don: Yeah, I think a lot of people who’ve had COVID have had their gut microbiome altered. It’s a bacteriophage that changes the profile of your bacteria.
MM: I had a dream last night that I was drinking a glass of water, and at the bottom, there was a little axolotl swimming in there. I wondered if I needed antibiotics or something. Do you work with axolotls?
Mike: A little bit, not too much. They’re amazing animals. I once received some fan fiction about giant axolotls used in the Civil War to regrow limbs.
MM: It’s incredible. Future humans will look back at us and be amazed at how we lived.
Mike: I did an informal poll on Twitter once, asking if you were the first caveman who figured out fire and saw everything that followed from that, would you keep going or put it out? Six percent said we shouldn’t have had fire.
Don: People will be amazed at the stupidity of physicalism and how it was the received opinion. Eventually, the view of consciousness as fundamental will lead to brand new technologies that will be truly astonishing. We won’t have to go through space-time; we can go around it.
MM: I’m still wrapping my head around everything you’ve said. It’s been really good.
Don: I’ll be happy to talk with both of you again.
Mike: Thanks so much, both of you. Really fun.
Don: Thank you for sending me those links. I’ll be very interested to see your work on it.
MM: I will send them shortly. Thanks again, Don.
Don: Thank you, Mike.
Featured image by Midjourney.