Ambience
In the last chapter, we outlined how difficult it is to come up with a concise and general definition of what it means to do science, or to assess the quality of some knowledge claim and to decide whether it is scientific or not. The boundaries are blurred and constantly shifting. But even though they may be context-dependent and historically contingent, they are not arbitrary. Most of the time, we can tell what is more and what is less scientific in a given situation — even if we occasionally spark a controversy.
This much we can say with confidence: science is a skillful modelling activity that is firmly grounded in the way any evolved organism — from humble bacterium to sophisticated human investigator — comes to know the world. It is distinguished by specific standards and rules (such as the scientist’s commitment to impartiality, scepticism, and naturalism), plus the fact that scientific knowledge claims are particularly robust, and useful, because they enable us to act more coherently in our explorations of our large world than any other way of knowing we have come up with so far. In short: it is the best approach we have for structuring our nomological order. Doing science may be an art, it’s true, but it is one of the highest and most refined art forms humanity has ever developed. It’s essential for our grip on reality, for making the right decisions. Any civilised person or society will cherish and respect it for these exact reasons.
Having laid this foundation, we are going to do something slightly unexpected, maybe even a bit twisted, in the next few chapters: we’ll construct a formal model of scientific knowledge production! In fact, it is the most general formal model of science we can come up with. And, in the spirit of naturalism, it relies on the most advanced scientific and mathematical insights and methods available today. But don’t worry. This won’t involve any equations. It won’t require you to be particularly good at, or knowledgeable about, maths. We’ll present this model in an accessible and informal way. But know that what we discuss here is based on rigorous formal reasoning. We will provide the appropriate references as we go along.
“Hypocrites!” you’re probably thinking. We’ve been preaching, across nine seemingly never-ending chapters, that science (and the large world it is investigating) cannot be completely formalised. Why would we now suddenly switch sides and treat science as a formal activity? What on earth is going on here? We have some explaining to do, we know. And we will explain.
There are two basic ideas at work. The first is to take the formal model we are about to develop and push it to its limits, until it breaks. In this way, we use the model to examine what it does not and maybe cannot capture. This will help us to better recognise the current limitations of our theory of knowledge, of mathematical logic, of empirical investigation, and thus of science more generally.
Once more: our motivation is not to be fatalistic about this. We don’t mean to throw our hands in the air and say that there is no point in further exploration. On the contrary, we are pretty confident that scientific investigation will never cease. What we do want to achieve, though, is to improve the quality of our explorations — to make them more productive and sustainable. We do this by (re)learning to see what limits us. The boundaries of our knowledge can always be pushed. But to push them, we have to first recognise them. And how to push them is a really tricky question. We think that there are many more bad options for us right now than there are good ones. So we must tread carefully.
The second idea is that a formal model allows us to make our often vague and abstract concepts explicit, to render them as precise and concrete as we possibly can. There are many fuzzy verbal arguments in philosophy and science today that create controversy because it is easy to misunderstand the concepts and theories they are based on. There are many ways by which well-informed scientific disagreements can be constructive, but this is not one of them. It only results in us talking past each other, which is one thing we’d really rather avoid.
This means that we use our formal model not to predict anything, but to better understand how our explorations of the world actually work. We’ll turn the model not just to the outside, but also on ourselves. The two go together nicely. This, in turn, allows us to explore the conceptual space we have constructed to understand the world, to shine a reflecting light on the knowledge we have already built.
As already mentioned several times, our journey must begin with personal experience — ours, and that of our peers and ancestors. It is the ultimate (and only) source of new knowledge about the world. But what does it mean to be a person who experiences? Primarily, it means that we need to make a basic distinction, between self and non-self, between subject and object, between ourselves and a world “out there,” in order to investigate anything. Following Robert Rosen, we will call the latter the ambience. It has no positive definition, but simply includes everything that is not us. When doing science, our aim is to explain and understand the ambience, to gain robust knowledge of it that allows us to act coherently in our large world. For this, there must be a someone who understands, explains, and acts. And that someone must be a (usually tiny) part of their world, a part we can define and delimit in a specific way.
Rosen calls this distinction the first dualism, but we don’t like this expression, because what we are doing here is not the same as the dualism of Descartes. Instead, we will use a term introduced by Howard Pattee: the distinction between self and non-self is an epistemic cut — in fact, it is the primaeval epistemic cut. This cut is a special case of the fundamental problem of knowledge that we have already encountered — the gap that separates knower and known, observer and observed, controller and controlled. In physics, it manifests as the fundamental problem of measurement. It is an epistemic necessity: some part of the universe has to do the measuring. We may describe the measuring device as a physical object itself, but then we lose the content of the measurement. Again: science aims not only to generate knowledge, but to explain and understand the world. Without someone to explain or understand, there can be nothing to know.
And the other way around: without a reality external to ourselves, whose existence precedes our existence and goes beyond our control, there can be nothing to observe, nothing to measure, nothing to know about. Few solipsists become good scientists. Scientific research is (by definition) a third-person perspective on the world, concerned with the other, with structuring our nomological order. It should not (and cannot) replace the complementary second- and first-person perspectives of our normative and narrative orders. If we miss this point, we commit the folly of scientism, or scientific overreach.
Now, one thing is important to put straight at the outset: when we distinguish self from ambience, we are not separating the two. They are intricately linked, intimately commingled even. For one, the self can only exist (and be defined) in relation to the ambience (and the other way around). Moreover, there is constant interchange of matter, energy, and information between the two. We explained earlier that all our experiences are transjective, arising from exchanges between subject and object. And we outlined in the previous chapter how these encounters shape and transform what happens within our horizon. Thus, when we distinguish self from ambience, we do so knowing that both are constantly co-arising from their mutual interrelations, and not independent entities, divided by a strict and static boundary.
This also means that the self is not a thing. Instead, it is a self-manufacturing process, deeply embedded in its ambience. David Hume already puzzled over the fact that you are no longer who you were five minutes or ten years ago. Concluding from this that there is no self, however, is a logical fallacy that stems from considering the problem in terms that are too static. The self does exist: there is a causal continuity between you and that earlier self that has its own, very distinctive, dynamics. Your self has been constructing and maintaining itself all that time. It does this every moment of your life, automatically, unconsciously, even when you are not paying attention, when you are asleep (or in a coma). To be alive, to be an organism, to be an individual, to be a self, means continuously manufacturing yourself.
This is quite a burden, if you think about it. What distinguishes you from a rock is that the rock persists by doing absolutely nothing, while you have to constantly invest physical work just to stay alive. And it is a problem for the machine thinker, because “manufacturing yourself” seems to go against Aristotle’s (and Newton’s) strict prohibition of circular reasoning and causation. We’ll spend most of the rest of the book clarifying how all this works, and how it is not really a problem for the committed naturalist.
What is important for now is that the primaeval epistemic cut introduces a perspective or window on the world. The self not only exists, but it is essential for making sense of anything: we must have a locus from which to experience the world. Without this kind of situatedness, without perspective, there is no knower, no observer, nobody to measure anything. It makes no sense to talk about truly objective knowledge — a true view from nowhere. If there is no knower — or if the knower, like Laplace’s demon, is assumed to be supernatural, unreal, out of this world — there is no knowledge. Remember: processes in the pleroma just happen, without making any difference. Meaning is generated in the creatura — there must be a subject to whom differences make a difference. This is what necessitates the primaeval epistemic cut.
Formally, this cut is a logical disjunction in the form of an exclusive or, which implies that knowledge is fundamentally semantic. But what on earth does that mean? It means that knowledge is about the ambience, referring to what is other than the self from within the horizon of consciousness. Knowledge derives its meaning from how robustly it connects ourselves to reality, and how relatable, relevant, and useful it is to us. But: if you consider the whole world a mechanism, or a computational (algorithmic) process, there is no place for semantics, there can be no meaning at all. Underneath, it’s all just “microbangings” between fundamental particles, as James Ladyman calls it. These proceed according to a finite set of predefined formal rules, which leave no space for semantics. It’s syntax all the way down. The machine view kills meaning, it kills the knower, it kills life. It is not a human way of knowing.
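For readers who like to see the bare shape of such claims, here is one minimal way to write this one down (a shorthand of ours, not yet the formal model of the coming chapters): for every item $x$ we can pick out at all, exactly one of two predicates holds,

$$\mathrm{Self}(x) \;\oplus\; \mathrm{Ambience}(x),$$

where $\oplus$ denotes the exclusive or: the disjunction is true precisely when one side holds and the other does not. Everything we can point to falls on one side of the cut or the other, never on both.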
Yet, there are plenty of scientists and philosophers out there today who are in complete denial of this basic problem. Since the self and the semantic nature of knowledge that springs from it do not fit their worldview, they declare them to be illusory. This is a true leap of faith. The question we’d like to ask here is really simple: if your worldview denies the existence of phenomena that are clearly part of your experience (and are also absolutely necessary for a meaningful life), do you choose to adjust your view? Or do you press the world into the corset of your inadequate ideas? It seems like a no-brainer to us, but you’d be surprised how many scientists and philosophers stubbornly insist on choosing the map over the territory. The only explanation we have for this strange behaviour is that we have come to live so deeply inside our map that we no longer see the territory. This is why we have to go back to the basics.
Affirming the disjunction between self and ambience is a required first step for such a reconsideration. And, despite seeming obvious, it leads us to a profound insight: all our models of the world, no matter how broad and fundamental, are formulated in terms of symbol systems that exist purely within the horizon of our self. We express our most general truths via very idiosyncratic semiotic vehicles. The epistemic cut highlights this fundamental gap. In the next two sections, we dive deeper into each side of this divide. This allows us, in the last section of this chapter, to find a crossable bridge between the two.
Symbols
When we are on the subjective self side of the primaeval epistemic cut, we are in the domain of symbolic systems. Evidently, this is where all our scientific and formal models are located, and it is in this domain that we draw inferences about the ambience from these models. But what exactly is this symbolic domain?
As we explain in the appendix, symbols or signs are signifiers. They are abstract entities of a fundamentally semantic nature. Symbols are about something that is signified. They refer to some discernible property, object, pattern, or process (called the referent) that either exists within the space of our personal experience, or outside our self, as part of our ambience. On top of this, symbols also relate to one another, according to some set of syntactic rules, which are independent of the way a signifier relates to its signified. In this sense, the association of symbols and objects is arbitrary: there is no intrinsic reason why a particular symbol stands for a given object. Instead, their association is contingent on the history and context of the symbol system. And it’s arbitrary in the sense that it may have well been otherwise.
The general study of symbolic systems is called semiotics. One of its central postulates is that symbols are exclusive to living systems, because only organisms are selves to whom something can be signified. In other words, symbols need an interpreter to make any sense. There we have the necessity for the primaeval epistemic cut again: there is no meaning outside the domain of the living. Symbols cannot exist in isolation, but only in the context of a semiotic system, which provides the interpretive structure required to decode from symbol to referent. Only living organisms are such systems, are selves.
We encountered such signifiers in the last chapter when we discussed how a bacterium goes after food sources, and avoids toxins. The molecules in its surroundings are physical symbol vehicles that carry meaning: food is good, toxins are bad. Evolved internal mechanisms then translate the received symbols into appropriate, that is, coherent behaviour which is conducive to keeping the bacterium alive. Another good example is the genetic code, which we discuss in more detail in the appendix, plus a variety of other codes the living cell employs not only to maintain, but to evolve its own organisation.
Even in non-human organisms, the association of an object or process with a symbol is an act of abstraction. It is rooted in relevance realisation: we must first pick out some feature of our experience as distinguishable and relevant — a difference that makes a difference to ourselves. Only then is there the possibility to assign a symbol to it. On the flipside, this always implies backgrounding or excluding a whole range of other (possibly also relevant) features. This is what abstraction means: to focus on what seems to matter most. And once we have figured out what that is, we’d also like to remember it.
This is why symbol systems and coding processes in living organisms also serve a central role as memory structures. Take, once more, the genetic code as an example: the genome is the part of a cell that allows it to store relevant past experiences in symbolic form, to reuse them in future generations through an appropriate form of (de)coding or interpretation. Unlike the interrelated dynamic processes that allow the cell to self-manufacture, these memory structures are rather inert. Considered like this, the genome is actually one of the cell’s more boring components: DNA, by itself, does nothing interesting at all. Only in the context of a cell capable of actively interpreting its information can it serve its purpose. Again: it is obvious that semantic information only exists in the context of a living semiotic system.
In contrast to most other organisms, the capacity for abstraction and symbolic reasoning is particularly well developed in humans. It facilitates and boosts our awareness of relevant features of our experience, and enables us to communicate them to one another in complex social situations. This opens the possibility for the kind of propositional knowledge that is so central to the human experience. Nobody knows exactly how or when the capacity for symbolic reasoning first arose in the genus Homo, but it is ancient for sure — originating some time between 2.5 million and 60,000 years ago. During the same period, human language emerged with its abstract representation of meaning through phonemes. It is derived from the simpler kind of gestural and vocal communication we surmise our Australopithecine ancestors possessed, and which we can still observe in many primate species today.
But the greatest leap of abstraction occurs with the invention of the first alphabets, about 6,000 to 5,000 years before the present. These systematise the use of symbols in language by defining sets of signs that stand for particular phonemes, syllables, words, or other semantic units. It is hard to overestimate how powerful an invention the alphabet really is. It enables the use of written language as a novel memory structure, faithfully storing our language-based communications outside our perishable (and often unreliable) selves. And even more importantly: it introduces a completely new form of mental representation of the world, which is discretisation. Only through the invention of the alphabet does the idea that reality consists of discrete chunks or units become thinkable in the first place.
Considered from a mathematical point of view, alphabets are powerful generators that produce structure “for free.” All that is needed is the automatic operation of concatenation, of putting two letters next to each other. A limited number of distinct letters can then be recombined to form an unlimited number of possible words. In more precise terms, the number of words we can form from a finite alphabet is countably infinite (if there is no limit to word length). Endless symbolic possibilities open up with very little effort. In fact, if we include a word separator plus a bunch of punctuation symbols in our alphabet, the resulting abstract structure contains not only all books that have ever been written, but also all that will ever be written, in that particular alphabet. Jorge Luis Borges explores this mind-boggling fact in his amazing short story “The Library of Babel.” Read it, if you haven’t done so!
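To make the “structure for free” point tangible, here is a small Python sketch (a toy of ours; the two-letter alphabet is an arbitrary choice). It enumerates the words over a finite alphabet length by length: a procedure that, left to run forever, would visit each of the countably infinite words at some finite step.

```python
from itertools import count, product

def all_words(alphabet):
    """Enumerate every finite word over `alphabet`, shortest first.

    Concatenation alone generates a countably infinite set: there are
    len(alphabet)**n words of length n, and listing them length by
    length places every word at some finite position in the sequence.
    """
    for n in count(1):  # word lengths 1, 2, 3, ...
        for letters in product(alphabet, repeat=n):
            yield "".join(letters)

words = all_words("ab")
print([next(words) for _ in range(10)])
# -> ['a', 'b', 'aa', 'ab', 'ba', 'bb', 'aaa', 'aab', 'aba', 'abb']
```

Nothing in this enumeration knows or cares which of its outputs will ever mean anything. That, as we are about to see, is exactly the point.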
Be that as it may, if we only consider the syntactic rules of how words are formed from letters, they generate an infinite but still small world, where every word has a definite and unique (and actually very simple) relation to the letters that generate it. Only through the semantic process of relevance realisation can meaning be assigned to some of these words. And, as Borges points out: these little islands of meaning will be infinitely small and isolated in the infinite sea of meaningless symbol combinations. Without referents, there is nothing but endless abstract patterns without any content. And the assignment of meaning is almost always ambiguous, idealised, and underdetermined. This is how the messy large world “out there” creeps into the syntactic simplicity of the symbolic domain.
What we see at work here is a robust motif that we’ll encounter over and over again: by constraining the degrees of freedom in a symbolic system, we actually increase its expressive power. And by expressing our ideas in terms of a systematically constrained symbolic system, we make them stand out more clearly, we render them more graspable — for others to understand and internalise. And also: we make our abstractions from the world communicable in a much more precise manner. All of this would not be possible without a discrete alphabet. This is the true power of discretisation.
And if we take this abstraction process one step further, we end up with numbers and formal symbolic systems, which are specialised alphabets that we use to quantify features of our arena, and to relate the resulting quantities with each other in an exact and explicit manner. This is what computation is based on, and what Turing’s theory of computation is about. But recall that it is not a feature of the ambience. The world does not compute. Computation is how we humans abstract reality to reason about our particular ambience in a precise and rigorous manner. The symbols that Turing machines (or their real-world approximations) process are utterly meaningless until a human user of the machine associates them with referents, equipping them with semantic content. There is no meaning in a small symbolic world.
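To see just how thoroughly syntactic such a machine is, here is a toy Turing machine in a few lines of Python (our sketch; the “flip” rule table is a made-up example, not anyone’s canonical machine). The loop shuffles symbols according to its table and does nothing else; whether the tape encodes bits, votes, or base pairs is settled by the human who reads it.

```python
def run_turing_machine(rules, tape, state="start"):
    """Run a one-tape Turing machine until it halts.

    `rules` maps (state, symbol) -> (new_symbol, move, new_state).
    The loop is pure symbol manipulation: it has no access to what
    the symbols stand for, only to its table of rewrite rules.
    """
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")  # "_" marks a blank cell
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A made-up machine that inverts a string of 0s and 1s.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "10110"))  # -> "01001"
```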
So the real trick is not to come up with rigorous formal symbolic systems, but to connect them in a robust and coherent way to relevant features of our ambience, i.e., our arena. As limited beings living in a large world, we are confronted with two extremely daunting tasks: first, we must assign meaning (i.e., referents) to meaningless syntactic patterns and, second, we must find bridges between the isolated semantic islands we have created, to be able to relate and locate them within Borges’ infinite library. In order to best structure our nomological order, science strives to interconnect our semantic contacts with the ambience in a manner that is both as systematic and as generalisable as possible.
This neatly encapsulates the central challenges for any skillful scientific modelling practice, seen from a perspectivist point of view. The art of modelling lies in first picking out the relevant features of a phenomenon to focus on, and then building (formal, verbal, or even physical) models that we can use as inferential blueprints, to use philosopher Michela Massimi’s term. Again, the trick is not to come up with the model that most faithfully represents and reproduces all the details of a phenomenon, but to generate a model that is useful to draw the largest number of robust, relevant, and general inferences possible about the phenomenon to be understood.
In this way, we can consider models to be symbolic tools. As philosopher Tarja Knuuttila has suggested, such tools are abstract epistemic artefacts — which we construct within the horizon of our consciousness (or, in the case of scale models, in the physical world) in order to understand or predict the behaviour of a real-world phenomenon in our ambience. The metaphor of models as maps can be taken quite literally here: the true value of an actual map does not necessarily lie in how much detail of the territory it includes (as Borges already reminded us earlier on), but in how practical it is for a given purpose.
Once again, we realise that we are faced with an utterly quixotic and paradoxical task: we want models that are as powerful and as general as possible, but we are forced to formulate them in symbolic alphabets that are completely contingent on our history and nature as limited human beings. There is no way around it: we made these symbols up, and we defined their relations in ways that are convenient and useful to us. There is no guarantee (and there never will be one) that such symbols can represent the ambience “out there” in a faithful and truly objective manner. It can’t be done. And it misses the point. In the symbolic domain, it is the inferential connections, not precise representation, that count. They pick out meaningful patterns in the world because we have constructed them for exactly this purpose.
Yet again, this is a strange loop: science means to construct tools that allow us to build even better tools. They are built on top of each other. This is how any kind of knowledge production works: it is a social construction. But if it’s well done, it not only connects robustly to the ambience, here or there, but also lets us draw useful inferences between these connections. In this way, we weave an adaptive symbolic web that captures what is relevant to us in the ambience. This web is loaded with semantics. Trying to understand it by looking at purely syntactic informational patterns misses its proper meaning.
The world does not compute. And if we are quite strict: neither does a computer! At least not on its own: we use computers to calculate solutions to our problems. Yes, computers are symbolic systems. But without a human user they lack purpose, their calculations have no meaning. Only semiotic interpretive systems can generate such meanings, and those systems are alive. This is why, if we consider the world a computer, we quite literally lose touch with reality. And this is the ultimate definition of madness.
Physics
When we are on the ambience side of the primaeval epistemic cut, we are in the domain of natural systems. But what does this mean if we no longer pretend to have privileged access to an objective reality through the scientific method? To briefly recap: our reality — as captured by the concept of the ambience — is that which affects our lives but lies beyond our selves: beyond the reach of our actions, and beyond the mind-framed concepts and models we create when we construct (scientific) knowledge.
The ambience is the domain of physics. In its original definition, by Aristotle, physics is the science that encompasses the study of all natural systems, that is, any phenomenon that we can pick out as relevant from the ambience. But therein lies the problem: what the heck is that supposed to mean?
We’ve already discussed how difficult it is to pin down exactly what is “natural.” What philosophers and scientists consider natural has changed repeatedly and radically throughout human history. The same applies to physics: Newton gave it a much narrower definition than Aristotle did. Classical mechanics only deals with a very specialised class of natural systems: mechanisms described in terms of simple equations of motion derived from Newton’s laws. It is not a science of all natural systems. Not even close.
The methodological and theoretical apparatus of physics was subsequently extended, repeatedly, to deal with phenomena that do not fit Newton’s narrow bill. Newton himself pushed the boundaries through his studies of optics, which led to the science of electromagnetic fields more generally. Then there are the macroscopic laws of thermodynamics, fluid dynamics, and condensed-matter physics. And, of course, the twin revolutions of quantum mechanics and relativity. Last but not least, we have the thermodynamics of open, far-from-equilibrium systems, as well as complexity science, as the latest additions. The latter go hand in hand to lift physics out of its idealised “box” of describing simple isolated systems at equilibrium, when we all know that no real-world system in the universe is truly like that.
So, there are clearly two very different ways in which we can talk about the domain of physics. The Aristotelian way leads us to equate naturalism with physicalism: by definition, all natural systems are physical systems. On this view, physics includes everything that is in the ambience, including biological and social phenomena. But this easily leads to confusion. Despite all its extensions, contemporary physics is still a much narrower field of study than envisioned by Aristotle. And, as we have seen, its remit constantly changes and is contingent on the history of the field. It certainly does not include all the natural phenomena we encounter in our ambience. Read this way, it makes no sense to claim “all systems are physical systems.” Yet, this is exactly what many reductionist scientists and philosophers do.
The idea is that all phenomena in the ambience can be reduced to physical explanations — usually phrased in terms of the behaviour of some fundamental fields or particles. Those extensions of physics listed above that concern higher levels of organisation? They are just temporary deviations from the reductionist ideal, until we manage to bring everything down to the basic level. You, for example, are nothing but the collective movement of an astronomically large number of atoms, bound together in molecules as described by quantum mechanical laws, and some day in the future we will be able to describe an entire organism like yourself that way. This reductionist strategy is called “bottoming out.”
Even though this approach makes intuitive sense to us modern human beings, it is actually quite weird, if you think about it from a perspectivist or pragmatist point of view. It amounts to the claim that all valid scientific explanations must ultimately reduce to a single model of the world, a single set of unifying laws, the elusive theory of everything that some physicists are still pursuing today.
This is problematic in two main ways. First of all, why should we assume that science is headed towards unification? The historical expansions of physics we have just outlined seem to indicate exactly the opposite. Despite some notable exceptions, such as the successful derivation of thermodynamics from statistical mechanics, we keep adding theories, models, and concepts to our physics toolbox, instead of reducing them to an underlying core. This is to be expected if we consider science a skillful modelling practice: as scientists encounter new problems to solve, and new phenomena to explain, they will keep on coming up with new theories, models, and concepts that improve our understanding and predictive capabilities in particular domains. On this view, you’d expect diversification, not reduction.
And second, there is absolutely no reason to assume that today’s incarnation of the fundamental laws of physics is even close to a theory of everything. It is a well-known fact that quantum mechanics and relativity are incompatible with each other. In fact, the current quest for a unified theory consists primarily of various attempts to bring the two together. Even if this succeeds, however, it is unlikely that we will have arrived at the ultimate theory of the universe. Laws are just models, as Ronald Giere pointed out. And as we’ve argued before, the evolution of our scientific models will never cease. New phenomena and new ways to organise matter constantly emerge. The theories of physics we have at any time in history are likely just snapshots of the continuing growth and transformation of the field.
So why not embrace the claim that we need to further expand (not reduce) the apparatus of current physics if we are to better understand biological and social phenomena? This is an argument that Robert Rosen, Howard Pattee and, more recently, Alicia Juarrero and Terrence Deacon have made. The basic idea is this: almost all approaches in physics today still follow Newton’s basic pattern of dividing nature into independent initial and boundary conditions — what defines the “box” around a model — and the underlying laws of motion. But this is just another epistemic cut! Ever since Newton, physics has chosen to focus primarily on the laws, while backgrounding the boundaries of a system under study. Yet, to get an integrated understanding of our world and our relation to it, we need to bridge that cut.
The problem is: initial and boundary conditions do not generally behave in a lawlike manner. Instead, they must be established through measurement, which takes us back to the “self” side of the epistemic cut: there must be someone who measures, and hence is not part of what is being measured. This problem is most evident in quantum mechanics, where measurement is directly involved (at least in some interpretations) in the collapse of the wave equation. But the problem applies more generally: the necessity for a measurer (and her measurement device) always leaves a chunk of reality undetermined. Plus: what to measure and how is not given by the underlying laws. Instead, it depends on how we draw the boundaries around the box. But this is the problem of relevance, which has no formal solution. It is not a task of problem solving, of computation, but one of problem framing, of judgement, of formalisation itself.
The problem is that this subjective factor is not eliminable: if we include the measurer in our object of study, we lose the measurement. In fact, by redrawing the boundaries of our system in this way, we just create the need for another measurer outside the newly drawn boundaries! It’s measurers all the way down — an infinite regress. This is a bit embarrassing for physicists who strive for pure objectivity. And it is precisely the reason why they have largely ignored boundaries for most of the field’s history: initial and boundary conditions are simply considered to be given. They are there when the activity of the physicist begins. Physics is what happens within such boundaries. Lee Smolin calls this “physics in a box.” And none of the extensions we have outlined above take us beyond this fundamental limitation.
But here is the crux of the matter: when it comes to biological and social systems, boundaries are pretty much all that matters. Let’s call them constraints. They delimit the many possible ways in which different processes interact within a system. These interactions restrict and channel the dynamic behaviour of the underlying chemical or physical processes in a top-down manner. While the dynamics obviously remain subject to the laws of physics, constraints change over time in ways that are not lawlike, but radically contingent on the history and context of the particular interactions they delimit. For this reason, Pattee calls them nonholonomic (“not lawlike”) or nonintegrable (as in integrating a mathematical function), while Juarrero calls them context-dependent. Constraint evolution is just one damn thing after another.
One of the most obvious examples of nonholonomic constraints at work is Darwinian evolution by natural selection. It occurs when there is a population of individual organisms with heritable variation in their behaviour or appearance (their phenotype, as biologists call it). If this variation affects the likelihood and rate of reproduction, or relative fitness of an individual (compared to others) in a given environment, we get adaptation. Environment and organism constrain each other in ways that are radically dependent on the context provided by both. Selective pressures from the environment cull individuals that are maladapted. Organsmic behaviour, in turn, affects these selective pressures because the organism seeks out opportunities and avoids obstacles in its arena. As Richard Lewontin put it: the organism is both the subject and object of evolution. And here it is: organisms are systems that bridge epistemic cuts.
Which brings us straight back to our earlier discussion of (computational) complexity: the individual constraints of organismic behaviour and the population-level ones of natural selection are not random (as so many people falsely assume). They follow a recognisable logic (otherwise there would be no theory of evolution), but they are also not lawlike. How organism and environment influence each other is a co-constructive process that is fundamentally unpredictable. We’ll come back to that. As a first step, let’s just say that the adaptive dynamics generated by a population of individuals acting out their lives in an ever-changing environment are computationally irreducible. It’s a bit like predicting the weather: our ability to compress them into laws and predictions is severely limited to very local and peculiar contexts.
The challenge is now to establish how these irreducible constraint dynamics can be grounded in the underlying laws of physics. Against popular belief, especially among biologists, this is not something we have achieved already in any satisfactory manner. It is true, the theory of evolution does not obviously contradict any laws of physics. But most of our current models of evolution are not properly grounded in physics either. Take replicator theory, for example. As (in)famously popularised by Richard Dawkins’ “Selfish Gene,” it assumes that the fundamental unit of evolution is a replicator, most commonly assumed to be a gene encoded in DNA. There is only one problem: replicators don’t replicate outside a living cell! Naked-replicator evolution is astronomically unlikely, according to the laws of thermodynamics.
Howard Pattee calls these kinds of models of biological evolution physics-free. They are very common. In fact, there is an entire cottage industry (called artificial life) that produces them, and consistently fails to capture most of the interesting features of real-world biological evolution. This failure is as well-known as it is illuminating: it indicates that computational models of evolution are missing essential ingredients. One of them is a realistic model of the organism (more on that later). The other is the open-endedness that springs from the behaviour of that agent. Computationalism kills creativity. Who’d have thought.
Instead of more useless computer simulations, we need what Pattee calls a physics of symbols. We need to extend physics, yet again, to bridge the epistemic gap. As we have already seen, symbolic systems are always embedded in the physical world. They need to operate in accordance with the laws of physics, but are not determined by them. To observe, to encounter, to measure the ambience is to turn physics into symbols in unpredictable ways. To exert control, to act, is to turn symbols back into physics. When both of these processes are integrated, as tightly as possible, we get a living system with semiotic closure.
When we are on the ambience side of the primaeval epistemic cut, we are in the domain of natural systems. But what does this mean if we no longer pretend to have privileged access to an objective reality through the scientific method? To briefly recap: our reality — as captured by the concept of the ambience — is that which affects our lives but lies beyond our selves: beyond the control of our actions, and beyond the mind-framed concepts and models we create when we construct (scientific) knowledge.
The ambience is the domain of physics. In its original definition, by Aristotle, physics is the science that encompasses the study of all natural systems, that is, any phenomenon that we can pick out as relevant from the ambience. But therein lies the problem: what the heck is that supposed to mean?
We’ve already discussed how difficult it is to pin down exactly what is “natural.” What philosophers and scientists consider natural has changed repeatedly and radically during human history. The same applies to physics: Newton gave it a much narrower definition than Aristotle did. Classical mechanics only deals with a very specialised class of natural systems: mechanisms described in terms of simple equations of motion derived from Newton’s laws. It is not a science of all natural systems. Not even close.
The methodological and theoretical apparatus of physics was subsequently extended, repeatedly, to deal with phenomena that do not fit Newton’s narrow bill. Newton himself pushed the boundaries through his studies of optics, which led to the science of electromagnetic fields more generally. Then there are the macroscopic laws of thermodynamics, fluid dynamics and condensed-matter physics. And, of course, the twin revolutions of quantum mechanics and relativity. Last but not least, we have open systems and far-from-equilibrium thermodynamics, as well as complexity science, as the latest additions. The latter go hand in hand to lift physics out of its idealised “box” of describing simple isolated systems at equilibrium, when we all know that no real-world system in the universe is truly like that.
So, there are clearly two very different ways in which we can talk about the domain of physics. The Aristotelian way leads us to equate naturalism with physicalism: by definition, all natural systems are physical systems. On this view, physics includes everything that is in the ambience, including biological and social phenomena. But this easily leads to confusion. Despite all its extensions, contemporary physics remains a much narrower field of study than Aristotle envisioned. And, as we have seen, its remit constantly changes and is contingent on the history of the field. It certainly does not include all the natural phenomena we encounter in our ambience. In this light, the claim that “all systems are physical systems” is simply not tenable. Yet, this is exactly the claim many reductionist scientists and philosophers make.
The idea is that all phenomena in the ambience can be reduced to physical explanations — usually phrased in terms of the behaviour of some fundamental fields or particles. Those extensions of physics listed above that concern higher levels of organisation? They are just temporary deviations from the reductionist ideal, until we manage to bring everything down to the basic level. You, for example, are nothing but the collective movement of an astronomically large number of atoms, bound together in molecules as described by quantum mechanical laws, and some day in the future we will be able to describe an entire organism like yourself that way. This reductionist strategy is called “bottoming out.”
Even though this approach makes intuitive sense to us modern human beings, it is actually quite weird, if you think about it from a perspectivist or pragmatist point of view. It amounts to the claim that all valid scientific explanations must ultimately reduce to a single model of the world, a single set of unifying laws, the elusive theory of everything that some physicists are still pursuing today.
This is problematic in two main ways. First of all, why should we assume that science is headed towards unification? The historical expansions of physics we have just outlined seem to indicate exactly the opposite. Despite some notable exceptions, such as the successful derivation of thermodynamics from statistical mechanics, we keep adding theories, models, and concepts to our physics toolbox, instead of reducing them to an underlying core. This is to be expected if we consider science a skillful modelling practice: as scientists encounter new problems to solve, and new phenomena to explain, they will keep on coming up with new theories, models, and concepts that improve our understanding and predictive capabilities in particular domains. On this view, you’d expect diversification, not reduction.
And second, there is absolutely no reason to assume that today’s incarnation of the fundamental laws of physics is even close to a theory of everything. It is a well-known fact that quantum mechanics and general relativity are incompatible with each other. In fact, the current quest for a unified theory consists primarily of various attempts to bring the two together. Even if this succeeds, however, it is unlikely that we will have arrived at the ultimate theory of the universe. Laws are just models, as Ronald Giere pointed out. And as we’ve argued before, the evolution of our scientific models will never cease. New phenomena and new ways to organise matter constantly emerge. The theories of physics we have at any time in history are likely just snapshots of the continuing growth and transformation of the field.
So why not embrace the claim that we need to further expand (not reduce) the apparatus of current physics if we are to better understand biological and social phenomena? This is an argument that Robert Rosen, Howard Pattee and, more recently, Alicia Juarrero and Terrence Deacon have made. The basic idea is this: almost all approaches in physics today still follow Newton’s basic pattern of dividing nature into independent initial and boundary conditions — what defines the “box” around a model — and the underlying laws of motion. But this is just another epistemic cut! Ever since Newton, physics has chosen to focus primarily on the laws, while backgrounding the boundaries of a system under study. Yet, to get an integrated understanding of our world and our relation to it, we need to bridge that cut.
The problem is: initial and boundary conditions do not generally behave in a lawlike manner. Instead, they must be established through measurement, which takes us back to the “self” side of the epistemic cut: there must be someone who measures, and hence is not part of what is being measured. This problem is most evident in quantum mechanics, where measurement is directly involved (at least in some interpretations) in the collapse of the wave function. But the problem applies more generally: the necessity for a measurer (and her measurement device) always leaves a chunk of reality undetermined. Plus: what to measure and how is not given by the underlying laws. Instead, it depends on how we draw the boundaries around the box. But this is the problem of relevance, which has no formal solution. It is not a task of problem solving, of computation, but one of problem framing, of judgement, of formalisation itself.
Worse, this subjective factor is not eliminable: if we include the measurer in our object of study, we lose the measurement. In fact, by redrawing the boundaries of our system in this way, we just create the need for another measurer outside the newly drawn boundaries! It’s measurers all the way down — an infinite regress. This is a bit embarrassing for physicists who strive for pure objectivity. And it is precisely the reason why they have largely ignored boundaries for most of the field’s history: initial and boundary conditions are simply considered to be given. They are there when the activity of the physicist begins. Physics is what happens within such boundaries. Lee Smolin calls this “physics in a box.” And none of the extensions we have outlined above take us beyond this fundamental limitation.
But here is the crux of the matter: when it comes to biological and social systems, boundaries are pretty much all that matters. Let’s call them constraints. They delimit the many possible ways in which different processes interact within a system. These interactions restrict and channel the dynamic behaviour of the underlying chemical or physical processes in a top-down manner. While the dynamics obviously remain subject to the laws of physics, constraints change over time in ways that are not lawlike, but radically contingent on the history and context of the particular interactions they delimit. For this reason, Pattee calls them nonholonomic (“not lawlike”) or nonintegrable (as in integrating a mathematical function), while Juarrero calls them context-dependent. Constraint evolution is just one damn thing after another.
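For readers who want to see the distinction in symbols (a minimal sketch in textbook mechanics notation, not part of Pattee’s or Juarrero’s own formalism): a holonomic constraint can be expressed purely in terms of a system’s coordinates,

$$f(q_1, \dots, q_n, t) = 0,$$

and can therefore be integrated away, eliminating degrees of freedom once and for all. A nonholonomic constraint unavoidably involves the velocities,

$$g(q_1, \dots, q_n, \dot{q}_1, \dots, \dot{q}_n, t) = 0,$$

in a way that cannot be integrated into the first form: it must be carried along with the dynamics instead of being absorbed into them. The classic example is a wheel rolling without slipping.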
One of the most obvious examples of nonholonomic constraints at work is Darwinian evolution by natural selection. It occurs when there is a population of individual organisms with heritable variation in their behaviour or appearance (their phenotype, as biologists call it). If this variation affects the likelihood and rate of reproduction, or the relative fitness of an individual (compared to others) in a given environment, we get adaptation. Environment and organism constrain each other in ways that are radically dependent on the context provided by both. Selective pressures from the environment cull individuals that are maladapted. Organismic behaviour, in turn, affects these selective pressures because the organism seeks out opportunities and avoids obstacles in its arena. As Richard Lewontin put it: the organism is both the subject and object of evolution. And here it is: organisms are systems that bridge epistemic cuts.
Which brings us straight back to our earlier discussion of (computational) complexity: the individual constraints of organismic behaviour and the population-level ones of natural selection are not random (as so many people falsely assume). They follow a recognisable logic (otherwise there would be no theory of evolution), but they are also not lawlike. How organism and environment influence each other is a co-constructive process that is fundamentally unpredictable. We’ll come back to that. As a first step, let’s just say that the adaptive dynamics generated by a population of individuals acting out their lives in an ever-changing environment are computationally irreducible. It’s a bit like predicting the weather: our ability to compress such dynamics into laws and predictions is severely limited, confined to very local and particular contexts.
The challenge is now to establish how these irreducible constraint dynamics can be grounded in the underlying laws of physics. Contrary to popular belief, especially among biologists, this is not something we have already achieved in any satisfactory manner. It is true that the theory of evolution does not obviously contradict any laws of physics. But most of our current models of evolution are not properly grounded in physics either. Take replicator theory, for example. As (in)famously popularised by Richard Dawkins’ “Selfish Gene,” it assumes that the fundamental unit of evolution is a replicator, most commonly assumed to be a gene encoded in DNA. There is only one problem: replicators don’t replicate outside a living cell! Naked-replicator evolution is astronomically unlikely, according to the laws of thermodynamics.
Howard Pattee calls these kinds of models of biological evolution physics-free. They are very common. In fact, there is an entire cottage industry (called artificial life) that produces them, and consistently fails to capture most of the interesting features of real-world biological evolution. This failure is as well-known as it is illuminating: it indicates that computational models of evolution are missing essential ingredients. One of them is a realistic model of the organism (more on that later). The other is the open-endedness that springs from the behaviour of that agent. Computationalism kills creativity. Who’d have thought.
Instead of more useless computer simulations, we need what Pattee calls a physics of symbols. We need to extend physics, yet again, to bridge the epistemic gap. As we have already seen, symbolic systems are always embedded in the physical world. They need to operate in accordance with the laws of physics, but are not determined by them. To observe, to encounter, to measure the ambience is to turn physics into symbols in unpredictable ways. To exert control, to act, is to turn symbols back into physics. When both of these processes are integrated, as tightly as possible, we get a living system with semiotic closure.
Closure
But, “wait a minute!” you may say at this point. Isn’t physics, by definition, the science dealing with the ambience? Isn’t it trespassing then if we extend physics to the domain of symbols? Well. The situation is a bit more complicated and — as you probably have come to expect by now — there is a strange loop at its core. So let us think a bit more about what it actually means to bridge the epistemic cut, and what doesn’t count as bridging it.
As we’ve seen in the last section, it does not simply mean to come up with a physical description of a symbolic system. While this is entirely feasible for any symbolic system (as all of them are embedded in the physical domain), it completely misses the point. Think of a quantum mechanical account of the text you are reading right now: it captures the behaviour of every particle (electron, atom, molecule) that is involved in its display, either on paper or on screen, and in its perception, through your eyes into your brain. But such a description will never give you the meaning you are extracting from the words, nor will it explain why you are reading this particular sequence of words. It is simply the wrong kind of description for this purpose. A description in terms of physics merely establishes that the symbolic constraint-dynamics you are engaged with are compatible with the laws of nature, which is trivial. Otherwise, these dynamics could not be real, could not exist in the first place.
Neither is it useful to try and bridge the cut the other way around: through the symbolic description of physical systems. Once more, we achieve this trivially: anything we can describe must be described through our idiosyncratic human repertoire of symbols. This applies to physical descriptions too, of course. And the confusion between map and territory comes back to haunt us: just because we can describe physical processes in terms of symbolic computation (which, as you’ll remember, is a model of how humans draw inferences logically), these processes aren’t necessarily of a computational nature in and of themselves. In the end, physical and computational descriptions are nothing but alternative human perspectives on the same ambience. So we’re not bridging any cut here either: all our descriptions, scientific or not, remain within the horizon of our consciousness — strictly in the realm of symbols.
So we seem to be stuck with two separate sides of the same coin — the symbolic and the physical. The paradox we’re facing is this: without the epistemic cut, we cannot make any distinctions that matter; but with the cut, our different ways to experience and describe the world, our perspectives from the first and the third person, seem to remain forever disconnected, condemning us to recreate different forms of dualism over and over again. Mind and matter, subjective and objective, symbols and physics: these contrasts are grounded in the epistemic cut. And our failure to bridge this cut results in a massive blind spot that ails our contemporary science, our modern nomological order, as Adam Frank, Marcelo Gleiser, and Evan Thompson correctly point out.
The good news is: we can get past the blind spot! But in order to do this, we need to let go of some pretty deep-rooted assumptions. In particular, we require an account of the relationship between the two domains that challenges our received view of science, because it is not at all mechanistic.
The basic point is this: organisms can connect and commingle their symbolic and physical domains through the strange loop of closure, which is something non-living systems cannot do. Remember the waterfall: it does not even have a symbolic aspect (unless we impute one to it). Neither do rocks, the weather, or other inanimate objects and processes. They do not compute — it’s as simple as that! But even non-living systems that do have symbolic aspects fail to achieve closure. This includes every machine humans have ever built — and, in particular, computers and the algorithms that run on them. It also includes robots. Especially robots! They are machines too, after all. But we are not.
This distinction is crucial if we want to understand how organisms come to know their world. Ironically, this includes human machine thinkers: the way they engage their large world is also not mechanistic! Not at all. The joke is on them. We did warn you: this is going to be paradoxical. To understand why, we need to compare how a living organism and a computer are organised. This reveals why only the former can achieve semiotic closure, while the latter remains semantically isolated.
Because the primaeval epistemic cut occurs right at the origin of life, when the first self emerges in the primordial form of a simple cell, we will start by looking at how such a free-living cell constructs itself. This is the simplest example of a true self-manufacturing organism we can think of. Also: we will only outline the bare bones of our account in this section. It’ll be fleshed out later, we promise! But even a pretty cartoonish description can already give us the right kind of intuition about the fundamentally distinct ways by which machines and organismic selves relate their symbolic and physical aspects.
Remember: any living being is a limited entity intimately intermingled with its large world. What sets such an entity apart is its ability to self-manufacture. Indeed, this ability is also a need: needful freedom, as Hans Jonas calls it in his “Phenomenon of Life.” It’s the flipside to the thermodynamic predicament, which we’ve encountered earlier. The fact that we have to constantly invest energy to keep ourselves alive gives us a certain autonomy from our environment. What we do next, to a certain extent, depends on our own internal dynamics, how we allocate physical work to construct constraints on the interacting physico-chemical processes that constitute us. This is not a lack of determinism, but a peculiar form of circular causality: emerging constraints ensure that the constraint-making process itself continues. This is the meaning of closure: constraints building themselves upon previous constraints. We’ll make this notion precise later on. What’s important for now is that the autonomy, and hence agency, of a living being originates, at its core, from the self-referential way in which it builds the constraints that define it.
Rosen (repurposing Aristotle’s four causes) calls this kind of organisation of living matter closure to efficient causation, Maturana and Varela refer to it as operational closure, and (building on their work) Alvaro Moreno, Matteo Mossio, and colleagues have been calling it organisational closure. All these terms mean more or less the same thing: they capture the autopoietic (“self-making” or “self-constructing”) nature of the living organism in a general sense. It’s what Jannie Hofmeyr means by self-manufacture. In fact, the idea goes all the way back to Immanuel Kant, who wrote in his “Critique of Judgment” (1790) that “each part of an organism owes its presence to the agency of all remaining parts, and in turn exists for the sake of the whole. Each part is reciprocally both end and means.” This is the very essence of closure.
Now, there is an important and widespread misunderstanding we must deal with summarily: closure is not the same as cybernetic feedback. Both are forms of self-reference, but that’s where the similarities end. A living being does not function like a thermostat. A living being, instead, is what sets the target of the thermostat, because she wants the room to be at a comfortable temperature for herself. Closure involves a regulatory hierarchy and the autonomy of a self-manufacturing agent, while feedback is “flat” in its structure and purely automatic and mechanistic in nature.
Take, for example, the feedback regulation of a metabolic pathway: let’s say the product of the pathway inhibits its initial enzymatic step, repressing its own production as long as there is enough of itself around. Here, the regulator is a metabolite inhibiting its own synthesis. Everything happens at the level of (bio)chemical components and their reactions, or at the level of material cause, if you like Aristotle’s four causes. Closure, on the other hand, involves constraints that act on underlying chemical or physical processes to ensure the continued production of higher-level constraints. It involves all four of Aristotle’s causes and is not completely mechanistic. Think of the regulation of the enzyme that is inhibited in our example above: this happens at the genetic level, which is different from that of metabolism, while both are subject to the organisational closure of the cell. We’ll have more to say about this very soon.
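To see just how “flat” such feedback is, here is a minimal sketch (with made-up rate constants, not a model of any real pathway): the product P slows its own synthesis and settles at a steady level. Everything happens at a single level of (bio)chemical dynamics; nothing in the loop builds or rebuilds the machinery that runs it.

```python
# End-product inhibition as pure feedback (toy numbers, hypothetical
# pathway): P slows its own production, so it settles at a steady state.

def simulate(steps=20000, dt=0.01, v_max=1.0, k_i=0.5, k_deg=0.2):
    p = 0.0  # concentration of the end product P
    for _ in range(steps):
        synthesis = v_max / (1.0 + p / k_i)  # P inhibits the first step
        p += dt * (synthesis - k_deg * p)    # production minus degradation
    return p

print(f"steady-state concentration of P: {simulate():.3f}")
```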
Another important difference is that the closure living systems exhibit occurs in the context of constructive (not computational) processes. Simulating it and embodying it in the physical realm are very much not the same thing. Organisational closure not only involves the fabrication of the cell’s material parts, but also their assembly into a functioning whole. This is the self-manufacturing process that makes it possible for the organisation itself, as a whole, to persist in the physical domain.
We’ve said it before: in contrast to nonliving systems, which persist without effort, a living system must constantly invest work to stay alive. This requires coordinated regulation across levels — orchestrating the lawlike dynamics of the underlying physico-chemical component processes and the context-dependent constraint-based organisation of the whole. Without coherence across levels, the components won’t behave as they should, in that they won’t contribute to the continued existence of the whole. All of this involves plenty of feedback regulation but, evidently, there is much more to it than that!
This is the reason why we find it problematic to talk about homeostasis as the defining characteristic of a living cell. It is true that Claude Bernard, the originator of the concept (though not of the exact term), meant it in the same sense in which we use closure here. Later, however, “homeostasis” came to describe feedback — what the thermostat does. It is easy to confuse the two. That’s why we will not use this term in what follows, always explicitly referring to (organisational) closure instead.
But we digress. It’s time we get back to our example of the cell! Before building a full model, which we shall do in due time, we introduce a simpler story that illustrates the main principle of closure and brings us back to semiosis and the cell’s constructive relation to its own physical surroundings. This example concerns the genetic regulation of cellular components such as enzymes, transmembrane transporters, cytoskeletal proteins, and other macromolecular factors that are required for its self-maintenance.
Because we have already introduced them above, let us focus on enzymes. They are classical examples of cellular constraints. Most enzymes are proteins, but some are RNAs. They are catalysts, altering the kinetics, i.e., the rate at which a (bio)chemical reaction occurs. This is achieved by lowering the activation barrier of the reaction, enabling it to occur spontaneously if it is exergonic, i.e., if its thermodynamic potential allows it to proceed to a state of lower free energy. In this way, the totality of all the metabolic enzymes present in a cell together “lift” a specific set of biochemical pathways from the much larger set of reactions that could potentially occur according to the laws of thermodynamics. Enzymes thus allow some (bio)chemical flows to outcompete others by proceeding more rapidly. This determines the overall metabolic state of the cell: different components will be fabricated under different circumstances.
An enzyme is a constraint on the underlying metabolic reaction: while it alters the rate of the flow, it is not itself altered by the reaction it catalyses. However, as proteins and RNAs decay, and as the metabolic needs of the cell change over larger time scales, enzymes must be replenished through macromolecular synthesis. And this is where the story becomes truly interesting: the synthesis of macromolecules such as proteins and RNAs requires the genetic code, because the cell produces them by means of the genes that store their symbolic information in its genome (see also the appendix).
Gene expression is a classical example of a computational (and hence symbolic) process. Who said that cells don’t compute? Of course, they do! At the very least they require coding processes for their activities and (as we shall see) for their evolution. We don’t deny that cells contain processes that can be usefully described as computation. Instead, we argue that a computational perspective is not sufficient to capture all the possible activities of a living cell. The difference is crucial, as we’ll never tire of pointing out.
Without going into too much biological detail: gene expression occurs in two main (de)coding steps. These consist of the transcription of genomic DNA sequence into RNA, and the subsequent translation of the RNA into protein (unless you are a ribozyme and don’t need translation). Each step converts one string of symbols (e.g., a polymerised sequence of nucleic acid bases) into another (e.g., a sequence of amino acids, or polypeptide). What we end up with is called primary structure. This is the symbolic aspect of the cell: the genome contains semantic information that the cell “reads” to fabricate primary protein and RNA sequences. We could describe all of this at the level of the underlying physics. But then, we’d completely lose the symbolic meaning of what’s going on in the process.
Of course, there is a flipside to this coin: primary sequences are not (yet) functioning enzymes! They are just polypeptide- or RNA-spaghetti that first need to fold into a more compact conformation that enables their enzymatic function. This is how the active sites that mediate the catalytic action of the enzyme are assembled. And this is not a symbolic process. There is no one-to-one correspondence between local primary sequence and the structure of an active site. Instead, macromolecular folding is (non-equilibrium) thermodynamics: physics pure and simple. We still don’t understand this process in all its glorious details, but one thing is crystal clear: whether or not a protein or RNA folds into a functional conformation depends radically on cellular context. And this context, in turn, depends on the current state of the whole cell, on what Claude Bernard called the cellular milieu: the totality of metabolites, macromolecules, ions, and other components that crowd a cell’s interior.
It should be clear: macromolecular folding is not a classical mechanism, and neither is it algorithmic in nature. First of all, it depends on a system-level variable (the cellular milieu) that is irreducible to (subsets of) specific components and their interactions. Second, it is highly stochastic in nature, and does not proceed via any fixed sequence of discrete operations, like a proper mechanism or algorithm would. And, third, it involves nonlocal interactions between remote parts of the folding macromolecule to generate functional active sites.
All of this means that processes that are not symbolic in nature are also crucial for the continued self-maintenance of the cell. Without functional enzymes, there is no (de)coding of genetic information! Transcription and translation are intricate (bio)chemical processes that involve dozens of functional protein and RNA factors — including the cell’s polymerases that catalyse transcription, and the RNAs and proteins that make up the ribosome, where translation occurs. These factors must themselves be (de)coded from the genome and must fold into functional forms, thereby closing the hierarchical circle.
In sum: the cell’s physical aspect constructs its symbolic aspect, which in turn provides the substrate for the physical aspect. The two aspects represent processes that do not merely feed back on each other: they completely depend on each other for their mutual existence — indeed, they co-construct each other. Without folding, no coding (and vice versa). And all of this involves all the different levels of organisation in a cell: from individual molecules to the cellular milieu. It’s not just a molecular process.
This is what semiotic closure means. In Pattee’s words, it is the self-referential condition that enables you to be on both the subject and the object side of the epistemic cut. Yes, that’s a strange loop: the enzymes required for macromolecular synthesis must themselves be (de)coded by the symbolic aspect of the cell (transcription/translation), and then be folded into functional form in the context of its physical aspect (cellular milieu). They are inseparable, intimately interdependent, woven and fused together like the two apparent sides of a Moebius strip, which turn out to be a single continuous surface.
And it is this interdependence that determines how the cell interacts with its surroundings: because it constructs itself in a codependent dance between symbols and physics that is driven by energy, matter, and information taken in from the environment, it is an autonomous agent. Semiotic closure is what enables a cell to have (at least some) control over how it manufactures itself and interacts with the world.
As Pattee points out, this is how cells and other organisms bridge the epistemic cut: perception (and measurement) turns physics into symbols, while autonomous control over the cell’s constraint-dynamics (including those that lead to actions and behaviour) turns symbols back into physics. The two go hand in hand. Life cannot exist if one of them is missing. Therefore, a physics that can deal with living systems, that can ground organismic organisation in the underlying laws of thermodynamics, must be a physics of symbols. It must pay close attention to the co-construction of context-dependent constraints — which is subject to, but not determined by, the underlying dynamic laws.
Now compare this to the relation between the symbolic (software) and physical (hardware) aspects of a von Neumann computer (i.e., pretty much any electronic device we use today). Remember, the von Neumann architecture is designed to mimic a universal Turing machine, to approximate it as closely as possible within the built-in limitations of the actual physical world. The whole point is to be able to run the widest achievable variety of algorithms on a single computer, in a way that is maximally independent from the hardware implementation. This is why we invent ever more sophisticated high-level programming languages. We want to remove symbolic computation from physics, software from hardware, as much as we can. This is why Pattee calls computers “semantic isolation boxes.” They achieve (very near) universal computation by keeping the symbolic and physical realms strictly separated.
In this sense, the architecture of our modern-day computers is as different from the organisation of an organism as it could possibly be. Organisms, as we have seen, entangle their symbolic processes with their physical aspects, so that both of them form a tight-knit co-constructive processual unit. Not only are they inseparable, but they build each other by constantly extracting energy, matter, and information from their physical surroundings. Think of another example: your cat versus your car. If you properly feed a kitten, you get a grown-up cat; but if you put fuel in your car, all you get is a slightly older, more used car (plus a bunch of exhaust). It should be clear from this: organisms are more like a dynamic vortex or whirlpool than a static mechanical or electronic machine: they are fluid, transient, constantly regenerating their structure by absorbing matter and energy from what is currently their environment. This is why it is often so difficult to delimit precisely where the physical boundaries of an organism really are. A living system is engaged in a constant process of becoming through continuous interchange with its ambience.
It is because of these differences that it is so utterly misleading to compare a physically embedded organism to a physically embedded machine. Their ways of being embodied and embedded in the world are fundamentally different. Take a robot, for example. At first sight, it is similar to a living system. It is equipped with hardware sensors and effectors that allow it to interact with the physical world. It possesses channels of perception, which convey information to its symbolic processing core, and corresponding means of control, which impose internally caused effects on its physical surroundings. In addition, due to the near-universality of its computational architecture, a robot can be preprogrammed to react appropriately across a vast variety of situations. Using AI, we can even get it to “learn” correct reactions through the detection of unexpected correlations in its input data. In fact, robots may now mimic our own autonomous behaviour to an extent that can fool the most critical human observer.
But here is the thing: without the ability and, indeed, the need to self-manufacture, to constantly invest work into maintaining itself, the robot cannot want anything. It is not an autonomous agent. It completely fails to achieve either organisational or semiotic closure. Nothing has any meaning to it, because it has no control over its own continued existence. It has no arena, no affordances — no obstacles or opportunities — because the robot has no intrinsic goals it could pursue. There is no relevance for it to realise. Sensor input is analysed purely in reference to preprogrammed target functions, which are imposed from the outside, by an engineer or programmer (who is an autonomous agent). Nothing is good or bad for the robot itself. Consequently, a robot cannot make any real judgments. It does not even have an internal reason to exist. In fact, all its “actions” are purely automatic. We can simply switch it off between tasks. Or leave it on, as it cannot get bored. There is not a shred of real autonomy. It’s all just make-believe, the appearance of agency. Or algorithmic mimicry, as we call it.
Everything is ultimately preprogrammed and formalised in the small world of a robot system, even though often in a convoluted or indirect way so that we don’t immediately recognise it as such. Humans are easily fooled this way, ascribing agency to all sorts of regular or target-oriented patterns and behaviours in our ambience. But the robot will always be limited by its formal computing architecture: both hardware and software — maximally separated from each other. It cannot transcend these limitations, which means that no artificial system that is built on current design principles will ever achieve autonomy, or agency, or consciousness, for that matter.
Machines cannot bridge the epistemic cut and will never achieve semiotic closure. Unless the code inside a robot directly generates the material aspects of its hardware, it is not an autonomous agent in the same sense an organism is. We are very far from developing the materials and architectures required to artificially create such a system. One day, we may be able to do so. Who knows. We personally think that this won’t happen any time soon. Maybe never. And we’re not sure it is a wise thing to try and do. The question is: why would we want to generate artificial autonomous agents? They would not do what we want them to do, since they would have their own agenda, which they’d want to pursue. And, in any case, we doubt it makes sense to call such an artificial agent a machine. It’ll be genuinely alive, and we’d better treat and respect it as a living system.
Yet again, we have covered a lot of terrain in this chapter, and we have used several different maps to find our way through treacherous conceptual territory. At the end of this journey, we arrive at the following conclusion: only organisms achieve semiotic closure. Therefore, only organisms can bridge the primaeval epistemic cut. Only organisms can get to know their world, because only organisms have a perspective to get to know the world from. And it is to this perspective that we will turn next: if we look at our ambience, from our own point of view, we can recognise certain patterns that seem relevant to us. Therefore, our next task will be to pick out these patterns from the ambience and to properly sharpen their definition in a way that allows us to model them and relate them to each other in a rigorous scientific manner that we can share with other human beings. This is the core of the art of modelling. It will be the topic of our next chapter. One thing we can assure you: there will be plenty more epistemic cuts!
But, “wait a minute!” you may say at this point. Isn’t physics, by definition, the science dealing with the ambience? Isn’t it trespassing then if we extend physics to the domain of symbols? Well. The situation is a bit more complicated and — as you probably have come to expect by now — there is a strange loop at its core. So let us think a bit more about what it actually means to bridge the epistemic cut. Or what doesn’t.
As we’ve seen in the last section, it does not simply mean to come up with a physical description of a symbolic system. While this is entirely feasible for any symbolic system (as all of them are embedded in the physical domain), it completely misses the point. Think of a quantum mechanical account of the text you are reading right now: it captures the behaviour of every particle (electron, atom, molecule) that is involved in its display, either on paper or on screen, and in its perception, through your eyes into your brain. But you will never get the meaning you are extracting from the words from such a description, nor an explanation why it is that you are reading this particular sequence of words. It is simply the wrong kind of description for this purpose. A description in terms of physics merely establishes that the symbolic constraint-dynamics you are engaged with are compatible with the laws of nature, which is trivial. Otherwise, these dynamics could not be real, could not exist in the first place.
Neither is it useful to try and bridge the cut the other way around: through the symbolic description of physical systems. Once more, we achieve this trivially: anything we can describe must be described through our idiosyncratic human repertoire of symbols. This applies to physical descriptions too, of course. And the confusion between map and territory comes back to haunt us: just because we can describe physical processes in terms of symbolic computation (which, as you’ll remember, is a model of how humans draw inferences logically), these processes aren’t necessarily of a computational nature in and of themselves. In the end, physical and computational descriptions are nothing but alternative human perspectives on the same ambience. So we’re not bridging any cut here either: all our descriptions, scientific or not, remain within the horizon of our consciousness — strictly in the realm of symbols.
So we seem to be stuck with two separate sides of the same coin — the symbolic and the physical. The paradox we’re facing is this: without the epistemic cut, we cannot make any distinctions that matter; but with the cut, our different ways to experience and describe the world, our perspectives from the first and the third person, seem to remain forever disconnected, condemning us to recreate different forms of dualism over and over again. Mind and matter, subjective and objective, symbols and physics: these contrasts are grounded in the epistemic cut. And our failure to bridge this cut results in a massive blind spot that ails our contemporary science, our modern nomological order, as Adam Frank, Marcelo Gleiser, and Evan Thompson correctly point out.
The good news is: we can get past the blind spot! But in order to do this, we need to let go of some pretty deep-rooted assumptions. In particular, we require an account of the relationship between the two domains that challenges our received view of science, because it is not at all mechanistic.
The basic point is this: organisms can connect and commingle their symbolic and physical domains through the strange loop of closure, which is something non-living systems cannot do. Remember the waterfall: it does not even have a symbolic aspect (unless we impute one on it). Neither do rocks, the weather, or other inanimate objects and processes. They do not compute — it’s as simple as that! But even non-living systems that do have symbolic aspects fail to achieve closure. This includes every machine humans have ever built — and, in particular, computers and the algorithms that run on them. It also includes robots. Especially robots! They are machines too, after all. But we are not.
This distinction is crucial if we want to understand how organisms come to know their world. Ironically, this includes human machine thinkers: the way they engage their large world is also not mechanistic! Not at all. The joke is on them. We did warn you: this is going to be paradoxical. To understand why, we need to compare how a living organism and a computer are organised. This reveals why only the former can achieve semiotic closure, while the latter remains semantically isolated.
Because the primaeval epistemic cut occurs right at the origin of life, when the first self emerges in the primordial form of a simple cell, we will start by looking at how such a free-living cell constructs itself. This is the simplest example of a true self-manufacturing organism we can think of. Also: we will only outline the bare bones of our account in this section. It’ll be fleshed out later, we promise! But even a pretty cartoonish description can already give us the right kind of intuition about the fundamentally distinct ways by which machines and organismic selves relate their symbolic and physical aspects.
Remember: any living being is a limited entity intimately intermingled with its large world. What sets such an entity apart is its ability to self-manufacture. Indeed, this ability is also a need: needful freedom, as Hans Jonas calls it in his “Phenomenon of Life.” It’s the flipside to the thermodynamic predicament, which we’ve encountered earlier. The fact that we have to constantly invest energy to keep ourselves alive gives us a certain autonomy from our environment. What we do next, to a certain extent, depends on our own internal dynamics, how we allocate physical work to construct constraints on the interacting physico-chemical processes that constitute us. This is not a lack of determinism, but a peculiar form of circular causality: emerging constraints ensure that the constraint-making process itself continues. This is the meaning of closure: constraints building themselves upon previous constraints. We’ll make this notion precise later on. What’s important for now is that the autonomy, and hence agency, of a living being originates, at its core, from the self-referential way in which it builds the constraints that define it.
Rosen (repurposing Aristotle’s four causes) calls this kind of organisation of living matter closure to efficient causation, Maturana and Varela refer to it as operational closure, and (building on their) work, Alvaro Moreno, Matteo Mossio, and colleagues have been calling it organisational closure. All these terms mean more or less the same thing: they capture the autopoietic (“self-making” or “self-constructing”) nature of the living organism in a general sense. It’s what Jannie Hofmeyr means by self-manufacture. In fact, the idea goes all the way back to Immanuel Kant who wrote in his “Critique of Judgment” (1790) that “each part of an organism owes its presence to the agency of all remaining parts, and in turn exists for the sake of the whole. Each part is reciprocally both end and means.” This is the very essence of closure.
Now, there is an important and widespread misunderstanding we must deal with summarily: closure is not the same as cybernetic feedback, although both are forms of self-reference. But that’s where the similarities end. A living being does not function like a thermostat. A living being, instead, is what sets the target of the thermostat, because she wants the room to be at a comfortable temperature for herself. By analogy, closure involves a regulatory hierarchy and involves the autonomy of a self-manufacturing agent, while feedback is “flat” in its structure and purely automatic and mechanistic in nature.
Take, for example, the feedback regulation of a metabolic pathway: let’s say the product of the pathway inhibits its initial enzymatic step, repressing its own production as long as there is enough of itself around. Here, the regulator is a metabolite inhibiting its own synthesis. Everything happens at the level of (bio)chemical components and their reactions, or at the level of material cause, if you like Aristotle’s four causes. Closure, on the other hand, involves constraints that act on underlying chemical or physical processes to ensure the continued production of higher-level constraints. It involves all four of Aristotle’s causes and is not completely mechanistic. Think of the regulation of the enzyme that is inhibited in our example above: this happens at the genetic level, which is different from that of metabolism, while both are subject to the organisational closure of the cell. We’ll have more to say about this very soon.
Another important difference is that the closure living system exhibit occurs in the context of constructive (not computational) processes. Simulating it, and embodying it in the physical realm are very much not the same thing. Organisational closure not only involves the fabrication of the cell’s material parts, but also their assembly into a functioning whole. This is the self-manufacturing process that makes it possible for the organisation itself, as a whole, to persist in the physical domain.
We’ve said it before: in contrast to nonliving systems, which persist without effort, a living system must constantly invest work to stay alive. This requires coordinated regulation across levels — orchestrating the lawlike dynamics of the underlying physico-chemical component processes and the context-dependent constraint-based organisation of the whole. Without coherence across levels, the components won’t behave as they should, in that they won’t contribute to the continued existence of the whole. All of this involves plenty of feedback regulation but, evidently, there is much more to it than that!
This is the reason why we find it problematic to talk about homeostasis as the defining characteristic of a living cell. It is true that Claude Bernard, the originator of the concept (though not of the exact term), meant it in the same sense in which we use closure here. Later, however, “homeostasis” came to describe feedback — what the thermostat does. It is easy to confuse the two. That’s why we will not use this term in what follows, always explicitly referring to (organisational) closure instead.
But we digress. It’s time we get back to our example of the cell! Before building a full model, which we shall do in due time, we introduce a simpler story that illustrates the main principle of closure and brings us back to semiosis and the cell’s constructive relation to its own physical surroundings. This example concerns the genetic regulation of cellular components such as enzymes, transmembrane transporters, cytoskeletal proteins, and other macromolecular factors that are required for its self-maintenance.
Because we have already introduced them above, let us focus on enzymes. They are classical examples of cellular constraints. Most enzymes are proteins, but some are RNAs. They are catalysts, altering the kinetics, i.e., the rate at which a (bio)chemical reaction occurs. This is achieved by lowering the activation barrier of the reaction, enabling it to occur spontaneously if it is exergonic, i.e., if its thermodynamic potential allows it to proceed to a state of lower free energy. In this way, the totality of all the metabolic enzymes present in a cell together “lift” a specific set of biochemical pathways from the much larger set of reactions that could potentially occur according to the laws of thermodynamics. In this way, enzymes allow some (bio)chemical flows to outcompete others by proceeding more rapidly. This determines the overall metabolic state of the cell: different components will be fabricated under different circumstances.
An enzyme is a constraint on the underlying metabolic reaction: though altering the rate of the flow, it is not itself altered by the reaction it catalyses. However, as proteins and RNAs decay, and as the metabolic needs of the cell change over larger time scales, enzymes must be replenished through macromolecular synthesis. And this is where the story becomes truly interesting: the synthesis of macromolecules such as proteins and RNAs requires the genetic code, because the cell produces them by means of the genes that store their symbolic information in its genome (see also the appendix).
Gene expression is a classical example of a computational (and hence symbolic) process. Who said that cells don’t compute? Of course, they do! At the very least they require coding processes for their activities and (as we shall see) for their evolution. We don’t deny that cells contain processes that can be usefully described as computation. Instead, we argue that a computational perspective is not sufficient to capture all the possible activities of a living cell. The difference is crucial, as we’ll never tire of pointing out.
Without going into too much biological detail: gene expression occurs in two main (de)coding steps. These consist of the transcription of a genomic DNA sequence into RNA, and the subsequent translation of the RNA into protein (unless you are a ribozyme and don’t need translation). Each step converts one string of symbols (e.g., a polymerised sequence of nucleic acid bases) into another (e.g., a sequence of amino acids, or polypeptide). What we end up with is called the primary structure of the macromolecule. This is the symbolic aspect of the cell: the genome contains semantic information that the cell “reads” to fabricate primary protein and RNA sequences. We could describe all of this at the level of the underlying physics. But then, we’d completely lose the symbolic meaning of what’s going on in the process.
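To make the symbol-manipulating character of these two steps concrete, here is a toy sketch in Python. The codon table is a tiny, hand-picked subset of the real genetic code (just enough to make the example self-contained), and the template sequence is invented; otherwise the sketch follows the textbook mapping from DNA via RNA to primary sequence.

```python
# Toy (de)coding: transcription and translation as pure string rewriting.
# The codon table below is a small subset of the standard genetic code,
# included only to keep the example self-contained.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "AAA": "Lys", "UAA": "STOP",
}

def transcribe(dna_template: str) -> str:
    """Transcription: rewrite the DNA template strand into mRNA."""
    complement = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(complement[base] for base in dna_template)

def translate(mrna: str) -> list[str]:
    """Translation: read the mRNA three bases at a time into amino acids."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

# An invented template strand whose transcript uses only the codons above.
template = "TACAAACCGTTTATT"
mrna = transcribe(template)  # "AUGUUUGGCAAAUAA"
print(translate(mrna))       # ['Met', 'Phe', 'Gly', 'Lys']
```

Note what the sketch delivers: a primary sequence, a string of symbols, and nothing more. The physics only enters afterwards, when the chain has to fold.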
Of course, there is a flip side to this coin: primary sequences are not (yet) functioning enzymes! They are just polypeptide- or RNA-spaghetti that first need to fold into a more compact conformation that enables their enzymatic function. This is how the active sites that mediate the catalytic action of the enzyme are assembled. And this is not a symbolic process. There is no one-to-one correspondence between local primary sequence and the structure of an active site. Instead, macromolecular folding is (non-equilibrium) thermodynamics: physics pure and simple. We still don’t understand this process in all its glorious detail, but one thing is crystal clear: whether or not a protein or RNA folds into a functional conformation depends radically on cellular context. And this context, in turn, depends on the current state of the whole cell, on the cellular milieu (a notion that echoes Claude Bernard’s milieu intérieur): the totality of metabolites, macromolecules, ions, and other components that crowd a cell’s interior.
It should be clear: macromolecular folding is not a classical mechanism, and neither is it algorithmic in nature. First of all, it depends on a system-level variable (the cellular milieu) that is irreducible to (subsets of) specific components and their interactions. Second, it is highly stochastic, and does not proceed via any fixed, ordered sequence of operations, as a proper mechanism or algorithm would. And, third, it involves nonlocal interactions between remote parts of the folding macromolecule to generate functional active sites.
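The first two points can be caricatured in a few lines of code. What follows is emphatically not a model of folding; it is a toy, with invented numbers, that only illustrates the logical structure of the claim: when the outcome depends on a system-level context variable and on chance, the very same sequence no longer maps to one fixed result, which is precisely what a classical mechanism or algorithm would require.

```python
import random

# A caricature of context-dependence, not a folding model. The same
# "sequence" ends up in different conformations depending on a single
# milieu parameter (here labelled "crowding") and on chance. All the
# numbers are invented for illustration.
def fold(sequence: str, crowding: float, rng: random.Random) -> str:
    """Return a toy two-state outcome: 'functional' or 'misfolded'.

    The sequence argument is deliberately ignored: in this caricature,
    only the whole-cell context varies, to make the point that the
    outcome is not fixed by the sequence alone.
    """
    p_functional = min(1.0, 0.2 + 0.7 * crowding)
    return "functional" if rng.random() < p_functional else "misfolded"

rng = random.Random(42)  # fixed seed, so the run is reproducible
for crowding in (0.1, 0.9):
    outcomes = [fold("MFGK", crowding, rng) for _ in range(1000)]
    share = outcomes.count("functional") / len(outcomes)
    print(f"crowding={crowding}: {share:.0%} fold functionally")
```

The third point, nonlocality, is beyond a toy like this; it is one of the reasons real folding remains so hard to predict.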
All of this means that processes that are not symbolic in nature are also crucial for the continued self-maintenance of the cell. Without functional enzymes, there is no (de)coding of genetic information! Transcription and translation are intricate (bio)chemical processes that involve dozens of functional protein and RNA factors — including the cell’s polymerases that catalyse transcription, and the RNAs and proteins that make up the ribosome, where translation occurs. These factors must themselves be (de)coded from the genome and must fold into functional forms, thereby closing the hierarchical circle.
In sum: the cell’s physical aspect constructs its symbolic aspect, which in turn provides the substrate for the physical aspect. The two aspects represent processes that do not merely feed back on each other: they completely depend on each other for their mutual existence — indeed, they co-construct each other. Without folding, no coding (and vice versa). And all of this involves all the different levels of organisation in a cell, from individual molecules to the cellular milieu. It’s not just a molecular process.
This is what semiotic closure means. In Pattee’s words, it is the self-referential condition that enables you to be on both the subject and the object side of the epistemic cut. Yes, that’s a strange loop: the enzymes required for macromolecular synthesis must themselves be (de)coded by the symbolic aspect of the cell (transcription/translation), and then be folded into functional form in the context of its physical aspect (the cellular milieu). They are inseparable, intimately interdependent, woven and fused together like the two apparent sides of a Moebius strip, which are in fact one continuous surface.
And it is this interdependence that determines how the cell interacts with its surroundings: because it constructs itself in a codependent dance between symbols and physics that is driven by energy, matter, and information taken in from the environment, it is an autonomous agent. Semiotic closure is what enables a cell to have (at least some) control over how it manufactures itself and interacts with the world.
As Pattee points out, this is how cells and other organisms bridge the epistemic cut: perception (and measurement) turns physics into symbols, while autonomous control over the cell’s constraint-dynamics (including those that lead to actions and behaviour) turns symbols back into physics. The two go hand in hand. Life cannot exist if one of them is missing. Therefore, a physics that can deal with living systems, that can ground organismic organisation in the underlying laws of thermodynamics, must be a physics of symbols. It must pay close attention to the co-construction of context-dependent constraints — which is subject to, but not determined by, the underlying dynamic laws.
Now compare this to the relation between the symbolic (software) and physical (hardware) aspects of a von Neumann computer (i.e., pretty much any electronic device we use today). Remember, the von Neumann architecture is designed to mimic a universal Turing machine, to approximate it as closely as possible within the built-in limitations of the actual physical world. The whole point is to be able to run the widest achievable variety of algorithms on a single computer, in a way that is maximally independent from the hardware implementation. This is why we invent ever more sophisticated high-level programming languages. We want to remove symbolic computation from physics, software from hardware, as much as we can. This is why Pattee calls computers “semantic isolation boxes.” They achieve (very near) universal computation by keeping the symbolic and physical realms strictly separated.
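To see what maximally hardware-independent symbol processing looks like in its purest form, here is a minimal Turing-machine interpreter in Python. The machine encoded in the rule table is our own toy example (it appends a 1 to a string of 1s, i.e., unary increment). Every step is a formal rule applied to symbols; nothing in it refers to the physics of whatever device happens to execute it, which is exactly the isolation Pattee has in mind.

```python
# A minimal Turing-machine interpreter: pure symbol manipulation.
# The rule table maps (state, read symbol) to (next state, symbol to
# write, head movement). This particular machine scans right over a
# block of 1s and appends one more 1: unary increment.
RULES = {
    ("scan", "1"): ("scan", "1", +1),  # keep moving right over the 1s
    ("scan", "_"): ("done", "1", 0),   # hit a blank: write a 1 and halt
}

def run(tape: list[str], state: str = "scan", head: int = 0) -> list[str]:
    while state != "done":
        if head >= len(tape):
            tape.append("_")           # extend the tape with blanks
        state, write, move = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return tape

print(run(list("111")))  # ['1', '1', '1', '1'], i.e., unary 3 becomes 4
```

Contrast this with the folding sketch above: here the same tape yields the same result every time, on any machine, in any milieu. That indifference to context is the whole point of the design.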
In this sense, the architecture of our modern-day computers is as different from the organisation of an organism as it could possibly be. Organisms, as we have seen, entangle their symbolic processes with their physical aspects, so that the two form a tight-knit, co-constructive processual unit. Not only are they inseparable, but they build each other by constantly extracting energy, matter, and information from their physical surroundings. Think of another example: your cat versus your car. If you properly feed a kitten, you get a grown-up cat; but if you put fuel in your car, all you get is a slightly older, more used car (plus a bunch of exhaust). It should be clear from this: organisms are more like a dynamic vortex or whirlpool than a static mechanical or electronic machine. They are fluid, transient, constantly regenerating their structure by absorbing what is currently their environment. This is why it is often so difficult to delimit precisely where the physical boundaries of an organism really lie. A living system is engaged in a continual process of becoming through constant interchange with its ambience.
It is because of these differences that it is so utterly misleading to compare a physically embedded organism to a physically embedded machine. Their ways of being embodied and embedded in the world are fundamentally different. Take a robot, for example. At first sight, it is similar to a living system. It is equipped with hardware sensors and effectors that allow it to interact with the physical world. It possesses channels of perception, which convey information to its symbolic processing core, and corresponding means of control, which impose internally caused effects on its physical surroundings. In addition, due to the near-universality of its computational architecture, a robot can be preprogrammed to react appropriately across a vast variety of situations. Using AI, we can even get it to “learn” correct reactions through the detection of unexpected correlations in its input data. In fact, robots may now mimic our own autonomous behaviour to an extent that can fool the most critical human observer.
But here is the thing: without the ability and, indeed, the need to self-manufacture, to constantly invest work into maintaining itself, the robot cannot want anything. It is not an autonomous agent. It completely fails to achieve either organisational or semiotic closure. Nothing has any meaning to it, because it has no control over its own continued existence. It has no arena, no affordances — no obstacles or opportunities — because the robot has no intrinsic goals it could pursue. There is no relevance for it to realise. Sensor input is analysed purely in reference to preprogrammed target functions, which are imposed from the outside, by an engineer or programmer (who is an autonomous agent). Nothing is good or bad for the robot itself. Consequently, a robot cannot make any real judgments. It does not even have an internal reason to exist. In fact, all its “actions” are purely automatic. We can simply switch it off between tasks. Or leave it on, as it cannot get bored. There is not a shred of real autonomy. It’s all just make-believe, the appearance of agency. Or algorithmic mimicry, as we call it.
Everything is ultimately preprogrammed and formalised in the small world of a robot system, albeit often in such a convoluted or indirect way that we don’t immediately recognise it as such. We humans are easily fooled this way, ascribing agency to all sorts of regular or target-oriented patterns and behaviours in our ambience. But the robot will always be limited by its formal computing architecture: both hardware and software — maximally separated from each other. It cannot transcend these limitations, which means that no artificial system built on current design principles will ever achieve autonomy, or agency, or consciousness, for that matter.
Machines cannot bridge the epistemic cut and will never achieve semiotic closure. Unless the code inside a robot directly generates the material aspects of its hardware, it is not an autonomous agent in the same sense an organism is. We are very far from developing the materials and architectures required to artificially create such a system. One day, we may be able to do so. Who knows. We personally think that this won’t happen any time soon. Maybe never. And we’re not sure it is a wise thing to try and do. The question is: why would we want to generate artificial autonomous agents? They would not do what we want them to do, since they would have their own agenda, which they’d want to pursue. And, in any case, we doubt it makes sense to call such an artificial agent a machine. It’ll be genuinely alive, and we’d better treat and respect it as a living system.
Yet again, we have covered a lot of terrain in this chapter, and we have used several different maps to find our way through treacherous conceptual territory. At the end of this journey, we arrive at the following conclusion: only organisms achieve semiotic closure. Therefore, only organisms can bridge the primaeval epistemic cut. Only organisms can get to know their world, because only organisms have a perspective to get to know the world from. And it is to this perspective that we will turn next: if we look at our ambience, from our own point of view, we can recognise certain patterns that seem relevant to us. Therefore, our next task will be to pick out these patterns from the ambience and to properly sharpen their definition in a way that allows us to model them and relate them to each other in a rigorous scientific manner that we can share with other human beings. This is the core of the art of modelling. It will be the topic of our next chapter. One thing we can assure you: there will be plenty more epistemic cuts!
Previous: What It Is Like to Be a Scientist | Next: The Art of Modelling
The authors acknowledge funding from the John Templeton Foundation (Project ID: 62581), and would like to thank the co-leader of the project, Prof. Tarja Knuuttila, and the Department of Philosophy at the University of Vienna for hosting the project of which this book is a central part.
Disclaimer: everything we write and present here is our own responsibility. All mistakes are ours, and not the funders’ or our hosts’ and collaborators’.