Platonism, Process Philosophy, and more: Tim Jackson and Robert Prentner

Here is a discussion between Timothy Jackson and Robert Prentner about the theme of non-physical influences of patterns from the Platonic space (related to my recent paper on it), and about the philosophical background and history of these ideas. Below is their discussion, in the form of a transcript of an interesting email exchange they had on the topic:

Tim:

I position myself philosophically in a tradition that is less “anti-Platonic” than “anti-preformationist”. We can certainly recover a notion of forms/ideas, but we mustn’t thereby think we’ve solved the problem of genesis. The forms themselves must be accounted for, or they are naked posits, with their own explanatory “debt”. This, of course, is a non-trivial philosophical and scientific task, if one does wish to go “all the way back” (i.e. to the thought of cosmogenesis).

Here’s Charles Sanders Peirce, with a couple of nicely orienting quotes:

“The evolutionary process is, therefore, not a mere evolution of the existing universe, but rather a process by which the very Platonic forms themselves have become or are becoming developed.” (CP 6.194)

“In short, if we are going to regard the universe as a result of evolution at all, we must think that not merely the existing universe, that locus in the cosmos to which our reactions are limited, but the whole Platonic world, which in itself is equally real, is evolutionary in its origin, too. And among the things so resulting are time and logic.” (CP 6.200)

….which is not to say that Peirce quite solves this problem. But here he is in a network of “process-relational” thinkers that one might trace back (in the “Western” tradition, though there have been many non-Western thinkers who moved in very similar directions) to Heraclitus, and which has its resurgence in the modern era of philosophy in thinkers such as Schelling, Nietzsche, Peirce, Whitehead, Heidegger, Ruyer, Simondon, Deleuze, Guattari, Schürmann, and others.

Robert:

I am very sympathetic to Platonist-like ideas for the following reasons:

The first reason pertains to my interest in the notion of “consciousness,” which is a slippery concept (as everyone is prone to say). In particular, I would distinguish between a sense in which “consciousness” points to something that is fundamental and independent of individuals and a sense in which “consciousness” points to a function of the processing of information within a system (e.g. “generated by the brain”). I guess both readings have their justification. While I am personally inclined to favor the former idea, “consciousness as fundamental,” I am also very intrigued by the question of how to bring those two ideas together. And here, a Platonist-like model seems to be very promising.

The second reason, which I think is connected to the first, is that I am a proponent of what is known as Don Hoffman’s Interface Theory, according to which we perceive reality as it is useful to us (evolutionarily) but not as it “truly” is, independent of an observer/organism (i.e., “veridical” as perception scientists call it). While this thesis is mainly about the nature of perception, it seems to me to be true for many (all?) kinds of ways of interacting with the environment. Platonism is a promising way out of the difficulty of coherently bringing together a (technologically mediated) form of access with a skeptical outlook (on empirical knowledge). We create (via various technologies) “portals” (as Don Hoffman would put it) or “pointers” (as Mike would put it) into a realm behind the phenomena.

Thirdly, I am a professional philosopher, so I am interested in a modern reading that says something about technology. I also like to contextualize ideas from Plato, Kant, etc., in a more contemporary setting. This also extends to various modern philosophers; my favorite one being Whitehead. I am also very interested in phenomenology (Husserl etc.) and Pragmatism (Peirce, James etc.). But note that this is very complicated, as those people “ingressed” (to use Whitehead’s terminology) a whole history of philosophy that came before them…

Tim:

“The first reason pertains to my interest in the notion of “consciousness,” which is a slippery concept (as everyone is prone to say).”

I’m certainly interested in “consciousness” too, but I wouldn’t say it’s a first-order concern. Perhaps the primary desideratum for me, the one that brings together a lot of the somewhat disparate areas of investigation I work in (from molecular biology to musical improvisation), is the question of the origins of “structure”, which is to say the origins of novelty.

One of the reasons I am less motivated by consciousness studies here is that I think they are simply too mired in the “hard problem”, which I think of as being a false problem, in the sense in which Bergson or Deleuze would characterise the blind alleys in philosophy. Whilst there are specific issues with Chalmers’ influential formulation, the problem is of course much deeper – it’s predicated on a faulty, and historically contingent, ontology. 

I keep a very open mind about possibilities here, but I think what is required is a de- and reconstruction of this ontological legacy. This is both a question of the history of philosophy and of very contemporary scientific research programmes (including, of course, Mike’s). So it is very hard to pronounce on the viability of certain philosophical “stances”, without undertaking this ontological task with a sufficient degree of rigour. 

“The second reason, which I think is connected to the first, is that I am a proponent of what is known as Don Hoffman’s Interface Theory, according to which we perceive reality as it is useful to us (evolutionarily) but not as it truly is (i.e., “veridical” as perception scientists call it).”

I have to confess I don’t know Donald Hoffman’s view as well as I might. I have watched a number of videos of him discussing it, read a couple of papers, and the introduction to his book The Case Against Reality. I do not wish to straw man his perspective based on this limited engagement with it. However, when it is presented as a dichotomy between utility (or fitness) and Truth, alarm bells are ringing for me. Whitehead would call this the bifurcation of nature, and one of his fundamental desiderata is to move beyond it. 

I favour a relational ontology, and this has the consequence that “truth” is context-dependent – those contexts can be exceptionally broad, or exceptionally thin (temporally and spatially). This is the same claim as is made when we say that “fitness” is context-dependent in biology. We do not suppose an additional, static, globally invariant frame to which the organism has no need of adapting (the organism is of course well-adapted to the broadest frames, or it would not exist in the first place). We’re saying that the real is incurably heterogeneous, that what is apt over here might not be over there.

Separating a truth metric from a fitness metric in a computational model seems rather like presupposing Platonism. The specific aspect of Platonism that is being presupposed here – as in Kant’s distinction between the a priori and a posteriori – is Aristotle’s systematisation of the theory of forms in his doctrine of hylomorphism. We’re starting with the assumption that there is a globally invariant Truth, and then preserving this truth (as a valid deductive model preserves the truth of the axioms). 

Now, I do love Whitehead and Peirce, but each of them in their way, despite their claims to be “inverting Platonism”, maintains a significant vestigial “Platonic” aspect to their systems. One might say this is hardly surprising for logicians, despite the fact that both of them work extremely hard to give empiricism a certain pride of place in their schemes. 

Whitehead maintains a hylomorphic residue in his philosophy in the form of the “eternal objects” (although I note my colleague Matt Segall’s objection to this way of framing things, and indeed Whitehead’s entire project aims to overcome hylomorphism at some level!). What’s key here, for those of us exercised by questions of ontogenesis (as containing the potential for explanatory knowledge about the particular forms we encounter), is that Whitehead’s eternal objects have their origin in the Primordial Nature of God, and God’s own origin is “absolutely irrational” for Whitehead. Thus the eternal objects, which are (pre)definite but “deficient in actuality” (as God himself is), and which represent the predefined matrix of possibilities, have an absolutely irrational origin. Whilst his doctrine of prehension, ingression, concrescence, etc. (his magisterial process ontology) attempts an explanation of “relevant novelty”, the origins of the forms themselves are inexplicable. This is a mysterian position – in keeping with the hylomorphic legacy.

I’d suggest we can take a quite different tack regarding the origins of novelty, as well as the nature of perception. I’ve already written far too much, but I think it important to consider what we’ve learned since the 1970s about the spontaneous origins of structures as a consequence of dissipative adaptation. This is only a piece of the puzzle, certainly, but the apparent isomorphism between theories of perception (e.g. the free energy principle, FEP) and non-equilibrium dynamics in general (cf. Hermann Haken’s synergetics) is very suggestive.

In particular, when we think about cognition as “coarse-graining” (and the attendant issues of the frame problem, relevance realisation, etc) I don’t think we need to posit a “thing-in-itself” which is essentially static, and being coarse-grained by our perception (resulting in the view that our perception is not of the “veridical” nature of the thing, just some useful schematisation of it). Rather I think we might consider the notion of capturing the sufficient statistics of embedding environments as producing not falsity, but “veridical” perspectival/aspectual knowledge. And then look back to dissipative structures – formally, they are “coarse-grainings” of the microstates of their “environment”. This is what an attractor-selector looks like, but crucially, it is not simply summarising its embedding environment, but rather canalising it, shaping the distribution of micro-states directly, by virtue of its being immanent to them (it is nowhere else). This is the pathway to active inference. 
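To give this a concrete, if deliberately trivial, flavour – a toy sketch of my own, assuming the environment’s microstates are nothing but Gaussian draws, so that a mean and a variance form a sufficient statistic: the coarse-graining discards every individual microstate, yet remains “veridical” at its own grain.

```python
# Toy illustration (assumption: the "environment" is i.i.d. Gaussian noise,
# so the sample mean and variance are sufficient statistics for it).
import numpy as np

rng = np.random.default_rng(0)

# "Environment": a large ensemble of microstates we never track individually.
microstates = rng.normal(loc=2.0, scale=0.5, size=100_000)

# Coarse-graining: keep only the sufficient statistics (two numbers).
mu_hat, var_hat = microstates.mean(), microstates.var()

# Perspectival but veridical: the two-number macrostate predicts fresh
# observations about as well as carrying the full ensemble would.
fresh = rng.normal(loc=2.0, scale=0.5, size=1_000)
print("error of predicted mean:  ", abs(fresh.mean() - mu_hat))
print("error of predicted spread:", abs(fresh.var() - var_hat))
```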

The key here is that coarse-graining is not just what cognisers must do, but how structure emerges in the first instance. This is a basic selectionist principle. It results neither in relativism (it’s a robust relational realism) nor in some magical doctrine of “manifestation” in which our cognition directly manifests reality from some otherwise incoherent flux. It is an evolutionary/ontogenetic model, involving a temporal hierarchy of constraints, and complex systems describable in terms of nested constraint functions (attractors, non-equilibrium steady states). Cognition is “just” a version of a generic operation, in this view (driven spin glasses do it, too!). What one is moving towards here could be described as a “generalised theory of the observer”, but it’s not an “idealism” per se; indeed it’s closer to a hybrid of vitalism and animism (but I won’t develop that argument here).

Whitehead was working before the advent of non-equilibrium thermodynamics, and before the complex systems science that develops from the union of cybernetics, non-equilibrium thermodynamics, and chaos theory, amongst other things. He is a generaliser of evolution (as is complexity theory), but working prior to the other key fields involved. So perhaps we cannot blame him for believing the forms needed to be posited – this is his solution to the grounding problem, and something I and my Whiteheadian colleague Matt Segall have been endlessly debating for years. 

However, his broader ontology (minus the “absolutely irrational” origin of the forms) is strikingly consistent with a more thoroughly constructivist view. As far as coarse-graining and abstraction go, he even says (in his book Symbolism):

“Abstraction expresses nature’s mode of interaction and is not merely mental. When it abstracts, thought is merely conforming to nature, or rather, it is exhibiting itself as an element in nature.”

Several other philosophers, directly or indirectly influenced by Whitehead, extend this kind of thinking, e.g., Raymond Ruyer, Gilbert Simondon, Gilles Deleuze. Remarkably, two of the three (RR and GS) were themselves working prior to Prigogine’s monumental discovery of dissipative structures (but anticipate it). They were both huge influences on Deleuze, and Prigogine himself said that Deleuze was the most important philosopher for him in aiding the contextualisation of his discoveries (though of course he references Whitehead a fair bit, too!). 

Each of these philosophers has a robust notion of the “virtual” (all are influenced by Bergson), which corresponds to Mike’s “latent space”. Importantly, however, they each try to recontextualise Platonism in a more fully dynamic manner. Simondon probably goes the furthest here – his whole project is about trying to do away with the necessity for any pre-definiteness when we are considering ontogenesis, or what he calls (following Jung) “individuation”. 

There are, it seems impossible to doubt, “non-physicalised” or rather “non-actualised” possibilities (I am not advocating actualism). But they need neither be primary, pre-definite, nor static. Actualisation is the coming to definiteness under the influence of context-specific constraint functions. But any actuality has a (potentially vast) set of adjacent possibilities (pace Stuart Kauffman) associated with it. Deleuze would refer to this as “counter-effectuation” – for him the differentiation of virtualities is effected by the process of actualisation, which is the process of coming to determinacy (definiteness in the actual precedes definiteness in the virtual). 

Further, the operations performed by “form” and “matter” in the hylomorphic scheme do not (in actual fact) correspond to ontological (or substantial) categories. Here’s Simondon on the technical schema from which Aristotle’s hylomorphism is originally derived – the moulding of bricks:

“The technical operation of form-taking can therefore serve as a paradigm if we require this operation to indicate the veritable relations that it institutes. However, the relations are not established between the raw matter and the pure form, but between the prepared matter and the materialised form: the operation of form-taking doesn’t just suppose raw matter and form but also energy; the materialised form is a form that can act as a limit, as the topological boundary of a system.”

Apologies, this email is far too long. I do think these are crucially important issues to work through together – for all its length, this is nothing more than a cursory preamble.

Robert:

“I have to confess I don’t know Donald Hoffman’s view as well as I might. I have watched a number of videos of him discussing it, read a couple of papers, and the introduction to his book The Case Against Reality. I do not wish to straw man his perspective based on this limited engagement with it. However, when it is presented as a dichotomy between utility (or fitness) and Truth, alarm bells are ringing for me. Whitehead would call this the bifurcation of nature, and one of his fundamental desiderata is to move beyond it.”

Yeah, I guess that’s a problem. The notion of “truth” that is often talked about is typically the a-contextual one: “truth-as-correspondence” to some extra-individual given “out there.” This is the notion of truth that most (I would estimate: 99%) of scientists typically work with, and it is this notion of truth that is rejected by the view. Of course, one is free to choose any other notion of truth, and one will then come to very different conclusions. The devil is, as always, in the details, e.g. how to define words such as “coarse-graining,” “microstate,” or “sufficient statistics”… (if you do not like the word “truth,” which carries a lot of baggage, one could opt for the word “structure” instead).

Tim:

Re: Hoffman’s view and the rejection of truth/“reality”, I worry this is a version of “refutation by reification” (of course there’s an etymological issue here wrt “reality”, but….) – we define Truth/Reality as this static Absolute, and then (readily) demonstrate there can be no such thing, so we dustbin the entire concept of truth/reality (etc.). This is the road to anti-Realism.

On the other hand, if the thesis is about perception (or epistemology), or is a “theory of access” (or a “correlationism” in Meillassoux’s language), the claim might be that there is such a thing-in-itself, but that we simply cannot know it. Substituting “structure” for “truth”, it would be the claim that there is a global, objective structure to Reality, but that as finite beings we cannot know it – we are limited to distorted, lossy perspectives on it. This is a classically “Platonic” claim, which comes to us via a distinctly Christian inflection. The “mechanical philosophy” of Descartes, Newton, etc. (and all modern deterministic cosmologies) is the direct correlate of this view, and itself in turn the origin of Kant’s regulative principle (the “as if”).

Again, I don’t really know Hoffman well enough to know which of these claims he’s making, or if he might be conflating the two. Can you clarify further? 

Robert:

First and foremost, it is intended as a theory of perception, so I guess option 2. Now, it is right that the quasi-Kantian “cannot-know-the-thing-in-itself” line is suggestive, but as you rightly pointed out in your last email, you do not need to presuppose a “thing-in-itself” in the first place (which was/is the motivation for a lot of philosophy post Kant). Hoffman’s argument is similar to what is done in Kant’s Transcendental Aesthetic, but based on an evolutionary rather than a transcendental argument. The further consequence, namely that one is forced to subscribe to the “simply-cannot-know” thesis, is not really entailed (it is suggestive, but ultimately Hoffman would want to resist it). But I digress…

I guess the argument underlying the interface theory, framed this way, is merely that if you start with a belief in a pre-defined structure (and if you make certain other assumptions, e.g. “perception as tuned to fitness”), then you will end up in a situation where you have to conclude that your perceptions do not mirror this pre-defined structure in the end (thanks to evolution, they are not just lossy but completely wrong). So, if you make an intuitive assumption (i.e. the existence of a pre-defined structure), then you get into trouble. Of course, the million-dollar question is what other assumption one should make instead of this intuitive one. I think it is here that a “dynamic version” of Platonism might be interesting.
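A deliberately crude toy simulation (my own simplification for illustration, not Hoffman’s actual evolutionary-game model) can convey the flavour of that argument: when payoff is non-monotonic in the true quantity, an agent whose perception tracks payoff reliably outcompetes an agent that perceives the true quantity and simply prefers more of it.

```python
# Toy sketch of the fitness-vs-truth intuition (hypothetical payoff and
# agents of my own devising, not Hoffman's formal model).
import random

random.seed(0)

def payoff(x):
    # Fitness peaks at an intermediate quantity: too little or too much
    # of the resource is equally useless.
    return max(0.0, 10.0 - abs(x - 50.0) / 5.0)

def truth_choice(a, b):
    # "Veridical" agent: perceives the true quantities and takes the larger.
    return a if a > b else b

def fitness_choice(a, b):
    # "Interface" agent: perceives only a payoff-correlated category.
    return a if payoff(a) > payoff(b) else b

truth_total = fitness_total = 0.0
for _ in range(10_000):
    a, b = random.uniform(0, 100), random.uniform(0, 100)
    truth_total += payoff(truth_choice(a, b))
    fitness_total += payoff(fitness_choice(a, b))

print("truth-tuned agent:  ", round(truth_total))
print("fitness-tuned agent:", round(fitness_total))  # reliably higher
```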

Tim:

That’s helpful, thanks! 

I think what ends up being “not lossy, but completely wrong” is indeed precisely the posit of the “pre-defined” structure. Constructive evolution (or processes of actualisation) is the “coming to definiteness”. This is the reason I feel we should be wary of the standard notion of “Platonism” (which is not exactly Plato’s doctrine!), which is predicated on just such a posit. 

I’m not sure there’s necessarily a dichotomy between an evolutionary and a “transcendental” argument – post-Kantian philosophy is very sharply divided on this, as you know. In more contemporary scientific/evolutionary contexts, the transcendental is related to initial conditions/broken symmetries, and their tendency to disappear beyond horizons.

This moves us in the direction of a Whitehead or James (with “speculative empiricism”), or Deleuze (“transcendental empiricism”). The upshot is similar to Kant’s – it remains a thesis about the limitations on certain kinds of knowledge, but interpreted explicitly in terms of situatedness. We cannot see everything from “here”. This is deeply related to the reciprocity of the actual and the virtual, which takes us in the direction of a “dynamised Platonism”…..

….about which I agree, there are many devils in the details. One of which concerns the “inversion” of hylomorphism – the priority of “matter” (or rather concrete actualities, since there is no “matter” in the hylomorphic sense of a characterless, “passive” materiality) over “form” (the existence of definite forms is predicated on their adjacency to concrete actualities). 

I do wonder if the assumption of the pre-defined structure is “intuitive”, or rather socioculturally (historically) contingent?

Robert:

To answer your question: Yes, I too believe that what is “intuitive” is socio-culturally contingent. (Having had some experience with both the sciences and the humanities, I have often encountered the situation where scientists hate it when you say that; by contrast, people in the humanities hate it if you don’t say it.)

Tim:

Which goes to your point that scientists are implicitly “Platonists” 😉

 ….or is this itself too essentialist!?


Featured image by Midjourney.

7 responses to “Platonism, Process Philosophy, and more: Tim Jackson and Robert Prentner”

  1. Micah Zoltu

    Do you have a link to the paper you referenced where someone had a reduced brain volume but above average intelligence?

    1. Mike Levin

      Yep, it’s covered and referenced here: https://osf.io/preprints/osf/fqm7r_v1 (the real paper is accepted and should be officially out soon).

  2. Benjamin L

    It’ll be interesting to see how the philosophy of truth and beliefs and so on changes as evidence about the unification of action, perception, cognition, and development accumulates.

  3. Cameron F

    Hi Michael,

    Can consciousness follow like a mathematical proof follows from specific lemmas? Is it verified in a structural sense?

    Here’s my thought experiment on the Strange Machine:

    Computer A
    Standard setup:

    CPU → Memory
    Memory → CPU

    The CPU reads from memory, processes data, and writes back, forming a causally closed loop.
    We stipulate that A is conscious—its computation forms a structured process that instantiates an observer.

    Computer B
    Identical to A, except memory is cut and replaced with a random source that miraculously outputs exactly what the memory would have.

    From an external perspective, B behaves identically to A.
    Now, the question:
    If A is conscious, is B?

    A’s consciousness seems to be verified—its computations causally follow from past states. But in B, although the outputs match (B receives the right signal – say 1001 – but from the wrong place), they are not causally derived—they appear from an independent, unconnected source. This is like a line in a proof or mathematics problem where you use the wrong reasoning or mechanics but accidentally get the right answer.
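    For concreteness, here is a minimal sketch of the A/B contrast (my own toy, with a hypothetical stand-in “CPU” that simply increments whatever it reads back): the two systems produce identical outputs, but only in A is each state causally derived from the previous one.

```python
# Toy sketch of Computers A and B (hypothetical stand-in "CPU" that merely
# increments the value it reads): identical behaviour, different causal structure.
class ComputerA:
    """CPU <-> memory loop: every state is derived from the previous state."""
    def __init__(self):
        self.memory = 0

    def step(self):
        value = self.memory        # CPU reads from memory
        self.memory = value + 1    # processes and writes back
        return self.memory

class ComputerB:
    """Memory cut and replaced by a source that, by stipulation, happens to
    emit exactly what A's memory would have held. No causal loop remains."""
    def __init__(self):
        self.t = 0

    def step(self):
        self.t += 1                # the "random source" miraculously matches
        return self.t

a, b = ComputerA(), ComputerB()
print([a.step() for _ in range(5)])  # [1, 2, 3, 4, 5]
print([b.step() for _ in range(5)])  # [1, 2, 3, 4, 5] -- externally identical
```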

    I think this suggests that causality itself is playing a fundamental role in consciousness.

    But here’s the paradox: causality is not a physical thing. It’s a relationship between states, a purely structural property—yet it seems to be the deciding factor in whether an observer arises in A but not in B.

    So does consciousness require causal closure, the way a proof requires logical coherence? And if so, is consciousness more like a mathematical structure than a purely computational one?

    Consider Computer C, which is like B, but with a modification:

    Both the random source (RS) and a memory unit (MEM) feed into an equivalence gate that checks if their outputs match.
    If they are identical, the gate forwards the value to the CPU; otherwise, it outputs 0.
    Since we’ve stipulated that the RS always produces the correct values, the CPU in C receives the same data it would have if it had come from MEM alone.

    Is C conscious like A, or does it lack consciousness like B?

    Now consider computer D where the outputs from A’s memory go into D’s CPU. Is D conscious like A?

    I have a constellation of related thoughts, ideas and questions for you if this line of reasoning is on the right track.

    Curious to hear your thoughts.

  4. Cameron F

    Something else I’m unsure of, as it’s entirely metaphysical, is that perhaps reality is only mathematical at the bottom (MUH-like) and that although mathematics is ‘timeless’ it can be thought of as being generated.

    Let’s assume a world devoid of everything. Shouldn’t this imply it also lacks constraints on what’s allowed to occur? We said it is ‘true nothing,’ which to me implies it lacks the capacity to restrict spontaneous existence.

    Okay, following that, perhaps everything does ‘burst forth’ – the ‘raw material’ being equivalent to information separate from physical media. Since ‘everything’ gets generated, some forms will confer/resonate (make sense) with each other and form a continuous structure – this is our mathematics. All proofs, programs and statements build up from the mathematical raw material that was spontaneously generated.

    This could perhaps mean that there is raw material that doesn’t confer with our own mathematics. This is very speculative and almost certainly beyond experiment, but could there be ‘islands’ of self-consistent mathematical structure that our mathematics lacks the material to construct? Are there other worlds like this, perhaps with the equivalent of observers – though what they would be like is almost certainly beyond contemplation?

    I termed these islands of self-consistent, closed-off mathematics ‘mathoids’, and I was curious what you think about this?

    Unrelated to mathoids but related to my previous post here: I think brains construct a kind of computational time and space in our heads, built from relationships (mathematical relationships and causality), not physical stuff. It seems like ‘mental time’ (the internal sequence of thoughts) is invariant to changes in speed. Is that nonsense? I can’t help but notice that my internal clock always ticks at the same rate – though the outside world can speed up or slow down, I am consistent in the rate at which I experience myself. I can only be turned off or brought into existence, but it doesn’t seem like I change speed relative to myself, and the outside world is powerless to change this speed. Is that circular, or is that a true observation? Remember, I’m not talking about external time that can change rate as in time dilation; I’m talking about how our experience of persistence always ticks at the same rate. It might seem like the external world passes quicker or slower based on our mood or other factors, but the rate at which I tick seems immutable.

    I’m excited to hear your interpretation/opinion if there’s one to be made.

    Thank you!
    Cam

  5. Cameron F

    Hi Michael,
    Last one, sorry…

    In my code trace thought experiment, we assume a conscious computer and let it run, with an observer presumably perceiving its forward progression. Now, imagine we record all its operations and, at time tk, we undo them precisely, effectively reversing its time.

    Wouldn’t it still perceive its internal time as moving forward? Since this reversal is equivalent to a reverse proof, its internal experience of time should remain forward and invariant.

    What are your thoughts?
