Reality
In the first part of the book, we criticised the machine view of the world — and, in particular, computationalism as its contemporary guise — for being based on unwarranted and outdated metaphysical assumptions that unnecessarily distort and restrict our access to the large world we live in. One of the central ambitions of this naïve realist approach is also its greatest shortcoming: the attempt to cut the human observer out of the picture, to gain a “view from nowhere,” to obtain objective knowledge of a world existing “out there,” detached from our imperfect nature as limited epistemic agents.
But while it’s easy to condemn, it’s much harder to construct. Our next task must therefore be to reconstruct after deconstruction: to (re)build a solid evidential and philosophical foundation for a science that is appropriate for the human species in the 21st century.
Right at the beginning, we must tackle two central questions: how is it possible for a limited epistemic agent to gain trustworthy and relevant knowledge of the world? And: how does scientific knowledge production differ from non-scientific ways by which humans get to know the world? This and the following chapters will address both of these questions. We not only defend science as a privileged way of knowing the world against irrational anti-scientism, we intend to make it stronger and wiser!
A basic point bears repeating at the outset: scientific knowledge should not be grounded in abstract scientific laws alone. Don’t get us wrong: we take the fundamental laws of physics extremely seriously (they are among the most solid knowledge we currently have about the world) and we require our philosophy to be compatible with them. Yet, we also acknowledge that these laws are a social construct, the product of necessarily situated, limited, and biased human activity at a peculiar place and time in the history of the universe. The science of any age has been revised by later efforts, and there is no reason to assume that the scientific knowledge of our age is exceptional in this sense.
This obviously means that the fundamental laws of physics are not to be equated with reality! To equate them would be to commit the fallacy of misplaced concreteness — mistaking the abstract for the concrete. Because that is what the laws of physics are: abstractions we use to describe certain phenomena, certain parts of reality. These laws (like any other scientific theory) remain forever open to future revisions. Scientific knowledge is not a thing. Instead, scientific knowledge production is an open-ended adaptive process, a kind of (very general) evolutionary dynamic. And this evolutionary process cannot simply rest in itself, but must be anchored in human experience — our encounter with the world, as individuals, as communities, and as a species of limited but intelligent adapted agents and observers.
To use the terminology introduced in the last chapter: science makes essential contributions to our nomological order by providing us with a powerful methodology for structuring our experience of the world. All human knowledge is ultimately based on personal experience — our own, plus that of our peers and ancestors. Therefore, a reasonable and defensible theory of knowledge (or epistemology) should be centred around the human epistemic agent — including our social interactions with other such agents.
In the previous chapter, we have described experience as a dynamic agent-arena relationship — in which an epistemic agent actively realises what is relevant through some kind of judgement, that is, the assignment of value to features picked out of a perceived environment. These features do not arrive prepackaged in the world “out there,” but arise transjectively — through the active interaction of agent and environment. This results in an affordance landscape — an assemblage of more or less salient opportunities and obstacles that make up the arena in which the agent acts. Our world is full of meaning that we generate for ourselves.
The agent-arena relationship, therefore, is where everything begins. But what is our arena, you may ask? Is it real, is it objective, or is it just subjective, imagined, made up? Is it itself an abstraction? An idealisation? Or is it something concrete and tangible? If we are to base our theory of knowledge and our science on the agent-arena relationship, we’d better be sure we know what we are talking about.
The thing is: the affordance landscape that is our arena appears to us humans entirely and exclusively as the content of our consciousness. This is what has led some people, including Descartes, to doubt the very existence of physical reality “out there,” and to split it off from our internal representations of it. We’ve talked about this before: it is Cartesian dualism, something we want to avoid at all costs.
Luckily, there is a way out: as we have argued already, we are agents that are embodied, and thus deeply embedded, in our physical environment. This is why it isn’t really useful (or reasonable) for us to doubt the existence of the “outside” world. And yet, all of human experience (and therefore all we can ever know) happens within the “space” provided by our consciousness — or, more broadly, the “space” that is the totality of our inner (physiological and cognitive) states. Even a radical empiricist like David Hume had to separate “relations of ideas” from “matters of fact.” All we can build from, after all, are our ideas. Experience is subjective. No Cartesian dualism is required to acknowledge that.
This conundrum lies at the root of many philosophical confusions. And it is confusing! Especially since human subjectivity and consciousness are such complicated and multilayered phenomena. We’re never even fully aware of our inner state, our attention focussing only on a tiny and rather fleeting part of it.
What’s worse: there isn’t a common definition of “consciousness” or “awareness” that everybody can agree on. We really do not want to open this particular can of worms here. What we want to emphasise, however, is that most contemporary science-based theories of consciousness firmly fall under the machine view of the world (one way or another). Therefore, we see little use in engaging them in any depth here. We’ll point out some useful exceptions as we go along.
However, the main issue remains that we aren’t consciously aware of everything that happens when we experience. In fact, for non-human organisms, this non-conscious kind of experience is likely the norm (certainly for creatures lacking a nervous system) — and yet such creatures do come to know their world in their very own way! Philosophers all too often ignore this last observation, even today.
We’ll postpone the detailed resolution of these complications until later. For our current purpose — to begin formulating a naturalistic account of how we generate scientific knowledge — let us define consciousness in a broad, admittedly somewhat superficial, and rather human-centric way as “awareness and its objects.” To explicitly know something about the world, after all, requires us humans to be aware of it somehow in the first place. Defined in this way, consciousness provides the horizon within which scientific knowledge (and, in fact, any other kind of explicit propositional knowledge) can be generated. German phenomenologist Edmund Husserl was the first to express this idea.
Does your head hurt yet? Ours does, for sure. So let us take a break and recap: we established that everything a limited being can know about the world occurs within the horizon of its (collective) experience. More specifically, in the case of human (and especially scientific) knowledge, it occurs predominantly within the horizon of our consciousness. This is what we meant when we said that you cannot get a “view from nowhere.” Our knowledge will always be particularly human knowledge.
But this doesn’t imply that we cannot know anything about the world — that we’re trapped inside our heads. Quite the contrary. Here’s the trick: the domain of our ideas and their relations, inside the horizon of our consciousness, is neither closed in on itself, nor is it fixed or static. Because we are embodied and embedded in our world, it is constantly expanding and reconstituting itself, in a self-organising fashion that we will revisit later. What matters here are the two complementary causes for this constant transformation: new experiences and personal growth.
Let’s start with the former: the agent-arena relationship (which is continuously shaped and reshaped by our experiences) plays itself out mainly within the horizon of our consciousness. But how it plays out is not under our exclusive control. And it is certainly not confined to our heads.
Instead, our personal experiences are generated and shaped by our active encounters with the world. The first thing to note is that this involves our entire bodies, not just our brains and minds. Active exploration occurs through a tight interplay between perception and action. We do not get to know the world by sitting in our armchair. We get to know it by trying to act in the world in a coherent way.
As we all know, this involves falling on our face a lot. There is always a lot of learning still to do. And there is no learning without failure, without metabolising our errors, as Bill Wimsatt would have it.
This means that the large world we experience includes a substantial (but obviously unquantifiable) semantic residue: events that we have not yet experienced, features that we have not yet perceived, and phenomena that we have not yet picked out of our affordance landscape — either because they were inaccessible or irrelevant to us in the past. There are always aspects of the world we are yet to get to know. When we do encounter these, we try to interpret them and incorporate them into our knowledge. This can happen by mere trial and error, or in more systematic ways, as we shall see.
We can get to know the world’s semantic residue — our unknown unknowns — only by learning through active exploration. And more often than not, we bump into these unknown unknowns by accident. When this happens (and what we bump into is somehow relevant to us), we absorb and integrate the new experience in a coherent manner into our previous history of learning.
In fact, any limited being constantly encounters situations that it has never encountered before. A large world is an inexhaustible fount of surprises! It never ceases to amaze and bewilder. This feels scary to some, but it is a comforting thought to us. At the very least, scientists, philosophers, and artists (and everybody else, for that matter) will never run out of new things to discover, to study, and to create.
This also means that we cannot help but grow through experience: our encounters with the world generate inner states we’ve never experienced and possessed before. After a transformative learning experience, you literally see the world with different eyes, because you yourself have changed in the process, like (if you recall) the philosopher who escaped Plato’s cave. Anagoge is self-transcendence. The content of your consciousness transforms and expands in response to your experiences and the corresponding rearrangements they trigger in the physical system that is you. Your inner “space” keeps growing, but never ceases to be an inner “space”, occurring within your horizon of experience or conscious awareness.
To sum up: through the reciprocal co-constitution of our experiences and growth, triggered by encounters with a large world that is not under our control, our horizon of consciousness constantly expands and restructures itself. Alright. But this still leaves open the central question: what is reality, and how can we gain knowledge of it if all the knowledge we can ever have only exists within our personal horizon?
There are two issues here that are intimately related. First, how can we be sure that our knowledge of the world is accurate? How can we trust that we (or Descartes’ demon) are not deluding ourselves? Are we really connecting to reality? Which leads us to the second problem: how does the content of our consciousness relate to the large world we are interacting with? What is a good relation, in this context, and what is not? Do the contents of our heads have to somehow represent the nature of the “outside” world? These are the fundamental questions that will keep us busy until the end of this book.
So, we ask once more: how is it possible for us to gain trustworthy knowledge of the world? To properly understand this, we need to further clarify what the relation between agent and arena actually consists of. Being good naturalists, we do this according to our most up-to-date (scientific) knowledge (while taking care not to ground our account exclusively in any particular scientific law). In later parts of the book, we’ll use this kind of knowledge to construct a detailed model of the epistemic agent itself.
For now, let us note that what we aim (and are able) to achieve is a kind of robust resonance between our ideas and our external surroundings. They must be coupled in constructive and coherent ways for us to persist. This is what allows us to find our way around in our large world, to be at home in our universe.
Fundamentally — and again we are following philosophers such as Wimsatt and Chang — we think this is an engineering problem. This may come as a bit of a surprise. It’s true, we are strongly against viewing the world as a machine. Yet, at the same time, we see the production of solid knowledge as a kind of engineering process — that is, a systematic and principled construction. There’s no contradiction here.
The engineering problem we are talking about is how to construct our concepts and theories in a way that yields an appropriately tight relation between agent and arena, between our ideas and the world. This problem illustrates the mind-dependent aspect of our interaction with reality. When we frame ideas, when we formulate questions, when we express expectations and construct hypotheses, we are in control, and it is our own responsibility to do this in the best way we possibly can. In turn, our constructed expectations and concepts determine what we pick out as relevant in our experienced environment.
Since we can no longer rely on naïve realism — with its automated production of objective facts when the scientific method gets applied to solve problems (like an algorithm) — we must come up with new quality criteria to assess our newly gained knowledge. These could either be practical standards and rules, or more abstract epistemic criteria, that allow us to judge whether we are reliably picking out relevant features of our experienced environment. We’ll have more to say about this in the following sections.
For now we’ll leave you with this basic insight: the only reality we can actually attain as limited epistemic agents (through science or any other way of gaining sound knowledge) is based on concepts and models we construct in response to our experiential successes and failures during previous explorations, that is, our previous encounters (as individuals, as communities, or as a species) with the large world we live in.
Knowledge production and relevance realisation (especially when it comes to scientific knowledge) require solid conceptual engineering. Such engineering enables an adaptive evolutionary dynamic of individual and collective experiences — one that constantly readjusts and tightens the relationship between agent and arena, between our knowledge and the world, between the content of our consciousness and the part of reality that is not under our control but which we are able to meet, perceive, and interpret (in our idiosyncratic ways) through the guidance of our ideas. This is what it means to have (and maintain) a grip on reality.
If this relationship ceases to be adaptive (in any of the relevant ways), we can no longer survive and thrive in our environment. It is also plausible to assume that without having achieved such adaptation in the past, we would never have arrived at where we are now. Remember, once again: limited beings can only persist, i.e., maintain themselves, as long as they are adequately aligned with their environment.
Put simply, human experience does occur largely within the horizon of our consciousness. But this conscious experience is firmly embodied and embedded in the broader context of our agent-arena relationship, which actively expands and evolves through our explorations of the large world we live in. The world and our awareness of it form an adaptive co-evolving strange loop by which they mutually generate, transform, reinforce, and maintain each other. This loop represents our grip on reality, our way of connecting to a large world beyond our control. Keep this image in mind. This motif, a bit like a Möbius strip that seems to have two sides but has only one surface, will recur many times in the chapters to come.
In the first part of the book, we criticised the machine view of the world — and, in particular, computationalism as its contemporary guise — for being based on unwarranted and outdated metaphysical assumptions that unnecessarily distort and restrict our access to the large world we live in. One of the central ambitions of this naïve realist approach is also its greatest shortcoming: the attempt to cut the human observer out of the picture, to gain a “view from nowhere,” to obtain objective knowledge of a world existing “out there,” detached from our imperfect nature as limited epistemic agents.
But while it’s easy to condemn, it’s much harder to construct. Our next task must therefore be to reconstruct after deconstruction: to (re)build a solid evidential and philosophical foundation for a science that is appropriate for the human species in the 21st century.
Right at the beginning, we must tackle two central questions: how is it possible for a limited epistemic agent to gain trustworthy and relevant knowledge of the world? And: how does scientific knowledge production differ from non-scientific ways by which humans get to know the world? This and the following chapters will address both of these questions. We not only defend science as a privileged way of knowing the world against irrational anti-scientism, we intend to make it stronger and wiser!
A basic point bears repeating at the outset: scientific knowledge should not be grounded in abstract scientific laws alone. Don’t get us wrong: we take the fundamental laws of physics extremely seriously (they are among the most solid knowledge we currently have about the world) and we require our philosophy to be compatible with them. Yet, we also acknowledge that these laws are a social construct, the product of necessarily situated, limited, and biassed human activity at a peculiar place and time in the history of the universe. The science of any age has been revised by later efforts, and there is no reason to assume that the scientific knowledge of our age is exceptional in this sense.
This obviously means that the fundamental laws of physics are not to be equated with reality! This would mean committing the fallacy of misplaced concreteness — mistaking the abstract for the concrete. Because that is what the laws of physics are: abstractions we use to describe certain phenomena, certain parts of reality. These laws (like any other scientific theory) remain forever open to future revisions. Scientific knowledge is not a thing. Instead, scientific knowledge production is an open-ended adaptive process, a kind of (very general) evolutionary dynamic. And this evolutionary process cannot simply rest in itself, but must be anchored in human experience — our encounter with the world, as individuals, as communities, and as a species of limited but intelligent adapted agents and observers.
To use the terminology introduced in the last chapter: science makes essential contributions to our nomological order by providing us with a powerful methodology for structuring our experience of the world. All human knowledge is ultimately based on personal experience — our own, plus that of our peers and ancestors. Therefore, a reasonable and defensible theory of knowledge (or epistemology) should be centred around the human epistemic agent — including our social interactions with other such agents.
In the previous chapter, we have described experience as a dynamic agent-arena relationship — in which an epistemic agent actively realises what is relevant through some kind of judgement, that is, the assignment of value to features picked out of a perceived environment. These features do not arrive prepackaged in the world “out there,” but arise transjectively — through the active interaction of agent and environment. This results in an affordance landscape — an assemblage of more or less salient opportunities and obstacles that make up the arena in which the agent acts. Our world is full of meaning that we generate for ourselves.
The agent-arena relationship, therefore, is where everything begins. But what is our arena, you may ask? Is it real, is it objective, or is it just subjective, imagined, made up? Is it itself an abstraction? An idealisation? Or is it something concrete and tangible? If we are to base our theory of knowledge and our science on the agent-arena relationship, we’d better be sure we know what we are talking about.
The thing is: the affordance landscape that is our arena appears to us humans entirely and exclusively as the content of our consciousness. This is what has led some people, including Descartes, to doubt the very existence of physical reality “out there,” and to split it off from our internal representations of it. We’ve talked about this before: it is Cartesian dualism, something we want to avoid at all costs.
Luckily, there is a way out: as we have argued already, we are agents that are embodied, and thus deeply embedded, in our physical environment. This is why it isn’t really useful (or reasonable) for us to doubt the existence of the “outside” world. And yet, all of human experience (and therefore all we can ever know) happens within the “space” provided by our consciousness — or, more broadly, the “space” that is the totality of our inner (physiological and cognitive) states. Even a radical empiricist like David Hume has to separate “relations of ideas” and “matters of fact.” All we can build from, after all, are our ideas. Experience is subjective. No Cartesian dualism is required to acknowledge that.
This conundrum lies at the root of many philosophical confusions. And it is confusing! Especially since human subjectivity and consciousness are such complicated and multilayered phenomena. We’re never even fully aware of our inner state, our attention focussing only on a tiny and rather fleeting part of it.
What’s worse: there isn’t a common definition of “consciousness” or “awareness” that everybody can agree on. We really do not want to open this particular can of worms here. What we want to emphasise, however, is that most contemporary science-based theories of consciousness firmly fall under the machine view of the world (one way or another). Therefore, we see little use in engaging them in any depth here. We’ll point out some useful exceptions as we go along.
However, the main issue is and remains that we aren’t really consciously aware of all that happens when we experience. In fact, for non-human organisms, this non-conscious kind of experience is likely the norm (surely if you lack a nervous system) and it is not like such creatures cannot come to know their world in their very own way! Philosophers all too often ignore this last observation, still today.
We’ll postpone the detailed resolution of these complications until later. For our current purpose — to begin formulating a naturalistic account of how we generate scientific knowledge — let us define consciousness in a broad, admittedly somewhat superficial, and rather human-centric way as “awareness and its objects.” To explicitly know something about the world, after all, requires us humans to be aware of it somehow in the first place. Defined in this way, consciousness provides the horizon within which scientific (and, in fact, any other kind of explicit propositional knowledge) can be generated. German phenomenologist Edmund Husserl was the first to express this idea.
Does your head hurt yet? Ours does, for sure. So let us take a break and recap: we established that everything a limited being can know about the world occurs within the horizon of its (collective) experience. More specifically, in the case of human (and especially scientific) knowledge, it occurs predominantly within the horizon of our consciousness. This is what we meant when we said that you cannot get a “view from nowhere.” Our knowledge will always be particularly human knowledge.
But this doesn’t imply that we cannot know anything about the world — that we’re trapped inside our heads. Quite the contrary. Here’s the trick: the domain of our ideas and their relations, inside the horizon of our consciousness, is neither closed in on itself, nor is it fixed or static. Because we are embodied and embedded in our world, it is constantly expanding and reconstituting itself, in a self-organising fashion that we will revisit later. What matters here are the two complementary causes for this constant transformation: new experiences and personal growth.
Let’s start with the former: the agent-arena relationship (which is continuously shaped and reshaped by our experiences) plays itself out mainly within the horizon of our consciousness. But how it plays out is not under our exclusive control. And it is certainly not confined to our heads.
Instead, our personal experiences are generated and shaped by our active encounters with the world. The first thing to note is that this involves our entire bodies, not just our brains and minds. Active exploration occurs through a tight interplay between perception and action. We do not get to know the world by sitting in our armchair. We get to know it by trying to act in the world in a coherent way.
As we all know, this involves falling on our face a lot. There is always a lot of learning still to do. And there is no learning without failure, without metabolising our errors, as Bill Wimsatt would have it.
This means that the large world we experience includes a substantial (but obviously unquantifiable) semantic residue: events that we have not yet experienced, features that we have not yet perceived, and phenomena that we have not yet picked out of our affordance landscape — either because they were inaccessible or irrelevant to us in the past. There are always aspects of the world we are yet to get to know. When we do encounter these, we try to interpret and include them into our knowledge. This can happen by mere trial and error, or in more systematic ways, as we shall see.
We can get to know the world’s semantic residue — our unknown unknowns — only by learning through active exploration. And more often than not, we accidentally bump into this unknown unknown. When this happens (and what we bump into is somehow relevant to us), we absorb and integrate this new experience in a coherent manner into our previous history of learning.
In fact, any limited being constantly encounters situations that it has never encountered before. A large world is an inexhaustible fount of surprises! It never ceases to amaze and bewilder. This feels scary to some, but it is a comforting thought to us. At the very least, scientists, philosophers, and artists (and everybody else, for that matter) will never run out of new things to discover, to study, and to create.
This also means that we cannot help but grow through experience: our encounters with the world generate inner states we’ve never experienced and possessed before. After a transformative learning experience, you literally see the world with different eyes, because you yourself have changed in the process, like (if you recall) the philosopher who escaped Plato’s cave. Anagoge is self-transcendence. The content of your consciousness transforms and expands in response to your experiences and the corresponding rearrangements it triggers in the physical system that is you. Your inner “space” keeps growing, but never ceases to be an inner “space”, occurring within your horizon of experience or conscious awareness.
To sum up: through the reciprocal co-constitution of our experiences and growth, triggered by encounters with a large world that is not under our control, our horizon of consciousness constantly expands and restructures itself. Alright. But this still leaves open the central question: what is reality, and how can we gain knowledge of it if all the knowledge we can ever have only exists within our personal horizon?
There are two issues here that are intimately related. First, how can we be sure that our knowledge of the world is accurate? How can we trust that we (or Decartes’ demon) are not deluding ourselves? Are we really connecting to reality? Which leads us to the second problem: how does the content of our consciousness relate to the large world we are interacting with? What is a good relation, in this context, and what is not? Do the contents of our heads have to somehow represent the nature of the “outside” world? These are the fundamental questions that will keep us busy until the end of this book.
So, we ask once more: how is it possible for us to gain trustworthy knowledge of the world? To properly understand this, we need to further clarify what the relation between agent and arena actually consists of. Being good naturalists, we do this according to our most up-to-date (scientific) knowledge (while taking care not to ground our account exclusively in any particular scientific law). In later parts of the book, we’ll use this kind of knowledge to construct a detailed model of the epistemic agent itself.
For now, let us note that what we aim (and are able) to achieve is a kind of robust resonance between our ideas and our external surroundings. They must be coupled in constructive and coherent ways for us to persist. This is what allows us to find our way around in our large world, to be at home in our universe.
Fundamentally — and again we are following philosophers such as Wimsatt and Chang — we think this is an engineering problem. This may come as a bit of a surprise. It’s true, we are strongly against viewing the world as a machine. Yet, at the same time, we see the production of solid knowledge as a kind of engineering process — that is, a systematic and principled construction. There’s no contradiction here.
The engineering problem we are talking about is how to construct our concepts and theories in a way that yields an appropriately tight relation between agent and arena, between our ideas and the world. This problem illustrates the mind-dependent aspect of our interaction with reality. When we frame ideas, when we formulate questions, when we express expectations and construct hypotheses, we are in control, and it is our own responsibility to do this in the best way we possibly can. In turn, our constructed expectations and concepts determine what we pick out as relevant in our experienced environment.
Since we can no longer rely on naïve realism — with its automated production of objective facts when the scientific method gets applied to solve problems (like an algorithm) — we must come up with new quality criteria to assess our newly gained knowledge. These could either be practical standards and rules, or more abstract epistemic criteria, that allow us to judge whether we are reliably picking out relevant features of our experienced environment. We’ll have more to say about this in the following sections.
For now we’ll leave you with this basic insight: the only reality we can actually attain as limited epistemic agents (through science or any other way of gaining sound knowledge) is based on concepts and models we construct in response to our experiential successes and failures during previous explorations, that is, our previous encounters (as individuals, as communities, or as a species) with the large world we live in.
Knowledge production and relevance realisation (especially when it comes to scientific knowledge) require solid conceptual engineering, which enables an adaptive evolutionary dynamic of individual and collective experiences that constantly readjusts and tightens the relationship between agent and arena, between our knowledge and the world, between the content of our consciousness and the part of reality that is not under our control but which we are able to meet, perceive, and interpret (in our idiosyncratic ways) through the guidance of our ideas. This is what it means to have (and maintain) a grip on reality.
If this relationship ceases to be adaptive (in any of the relevant ways), we can no longer survive and thrive in our environment. It is also plausible to assume that without having achieved such adaptation in the past, we would never have arrived at where we are now. Remember, once again: limited beings can only persist, i.e., maintain themselves, as long as they are adequately aligned with their environment.
Put simply, human experience does occur largely within the horizon of our consciousness. But this conscious experience is firmly embodied and embedded in the broader context of our agent-arena relationship, which actively expands and evolves through our explorations of the large world we live in. The world and our awareness of it form an adaptive co-evolving strange loop by which they mutually generate, transform, reinforce, and maintain each other. This loop represents our grip on reality, our way of connecting to a large world beyond our control. Keep this image in mind. This motif, a bit like a Möbius strip that seems to have two sides but has only one surface, will recur many times in the chapters to come.
Model
Having established that limited epistemic agents can attain trustworthy knowledge of the world — that we can achieve relevant (though idiosyncratic and limited) perspectives on reality — we now ask: how does science generate knowledge that is particularly sound? Knowledge that generalises beyond subjective perspectives? How does science give us an exceptionally solid grip on reality? To answer these questions, we must first consider what sets science apart from other knowledge-producing activities.
We’ll begin by introducing the concept of a model of the world or, rather, of a particular aspect of the world. We’ll have more to say about what such an aspect could be in a short while. The basic idea that we’d like to get across for now is that every living being models its experienced environment in some way. Human science, therefore, is not fundamentally different from the way any limited being gets to know the world. It’s just a particularly sophisticated, articulate, coherent and robust way to model the world.
A word of caution is necessary at this point: philosophers always run the risk of oversimplifying things. So we need to be clear: we don’t mean to present some kind of unified theory of knowledge here, or pretend that all kinds of human learning or scientific inquiry neatly fit into a single simple template. Instead, we see the activity of modelling as one of the most central and fundamental aspects of a great variety of processes and procedures that allow us (and other limited beings) to get to know the world. It gives us a useful tool for thinking about scientific knowledge in relation to other ways of knowing.
It is important to highlight that we will be using the word “model” in a way that may be a little bit unusual and counterintuitive, and certainly broader than most people would use it in everyday language. A model, as we use the term here, need not be an explicit representation of an “outside” object or process. Nor does it need to be formalised using mathematical equations. It need not even be expressed in words. Think of it as some general “expectation” an organism may have towards its experienced environment that allows it to predict and assess the outcomes of its chosen actions.
For instance, a bacterium (one of the simplest living creatures) swims up a sugar gradient but away from a source of toxin, because evolution by natural selection has adapted it to do so. In the broadest sense of the term, the bacterium possesses an evolved internal model of its environment that consists of innate preferences and reflexive avoidance behaviours.
Evidently, this kind of primitive model is not representational: there is no physiological or genetic process inside the bacterium that corresponds to specific substances or their concentrations in the environment. The bacterium does not form any kind of internal image of its surroundings. Nor is it in any way aware of its model of the world; nor does it intend to reach the source of the sugar (or escape the toxin). Nevertheless, its behaviour is an evolutionary adaptation which is anticipatory and goal-oriented.
The word “goal” may trigger some machine-thinking enthusiasts. Yet, there is no mystery in such goal-orientedness: the bacterium’s model of its experienced environment simply consists of a repertoire of anticipatory behaviours (and their underlying physiological causes) that are structurally coupled (as the result of evolutionary adaptation) to the kind of sensory input a bacterium may receive from its environment.
Specifically, the bacterium senses the presence of chemical substances in its immediate physical surroundings through receptor molecules in its cell membrane, which trigger a signalling cascade inside the cell that exerts control over the rotation of the bacterium’s hair-like flagellum in response to its sensory inputs. By switching the flagellar rotor from clockwise to counter-clockwise, the bacterium alternates between a random tumble and consistently swimming in a (more or less) straight line. This is what allows it to bias its zig-zag motion towards the food, or away from the poison.
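The run-and-tumble logic described above can be caricatured in a few lines of code. This is a purely illustrative sketch, not a description of any real signalling cascade: the one-dimensional gradient, the step sizes, and the tumble probabilities are all invented for the example. The only "model" the simulated cell carries is a single memory: did the last reading improve on the one before?

```python
import random

def chemotaxis_1d(steps=2000, seed=0):
    """Toy run-and-tumble chemotaxis on a linear sugar gradient c(x) = x.

    The cell runs in a straight line; if the concentration reading
    improved since the last step it rarely tumbles, otherwise it
    tumbles often (picking a fresh random direction). No internal
    image of the environment is ever formed.
    """
    rng = random.Random(seed)
    x = 0.0
    direction = rng.choice([-1, 1])
    last_c = x  # previous concentration reading
    for _ in range(steps):
        x += 0.1 * direction        # run one step
        c = x                       # sense the local concentration
        improving = c > last_c
        last_c = c
        # tumble rarely when things improve, often when they don't
        p_tumble = 0.02 if improving else 0.5
        if rng.random() < p_tumble:
            direction = rng.choice([-1, 1])
    return x

# Averaged over many cells, the population drifts up the gradient,
# even though no single decision ever "aims" at the sugar source.
mean_x = sum(chemotaxis_1d(seed=s) for s in range(50)) / 50
```

The point of the sketch is exactly the one made in the text: a biased random walk with one bit of memory is enough to produce behaviour that looks goal-directed from the outside.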
The idea of structural coupling goes back to the 1970s, to biologist Humberto Maturana and cognitive scientist Francisco Varela, while the theory of biological anticipation was formulated around the same time by mathematical biologist Robert Rosen, who was also the first to suggest that every living organism (down to our humble bacterium) contains a set of predictive models about its (experienced) environment.
Let’s talk about Rosen’s theory first: he created a formal (mathematical) model of a cell as an anticipatory system, which consists of a sensory apparatus (e.g., molecular receptors and signalling cascades) and some kind of effector (the bacterium’s flagellar rotor) that can initiate actions and behaviour. These two subsystems of the cell are coupled together via internal predictive models, which receive their input from the sensors and causally modify the sensitivity of perception, and/or the dynamics of the effectors (in our case: the bacterium producing more receptor or changing the direction of the rotor).
The bacterium’s predictive models consist of some kind of mechanism (yes, mechanism) that compares input to model “prediction.” In the case of the bacterium, this means a tight coupling between readings of nutrient or toxin concentration in the surrounding medium over time and the frequency of changing rotor direction, which causes the bacterium to swim in the appropriate direction. In other words, the bacterium anticipates that there will be more food at the top of the nutrient gradient, and less poison further away from the toxin source, without being aware in any way that it is doing so.
This coupling not only tightly connects sensory input and action, but also allows the organism to proactively attune its behaviour to its environment. Structural coupling is a coupling of a coupling, if you will. While the bacterium’s reaction to sensory input is purely mechanistic, its ability to anticipate nutrient or toxin concentration is not (we’ll explain why this is in a later chapter).
In sum, even a really simple creature like a bacterium carries within itself a set of models (in a very broad sense) that allow it to anticipate the outcome of its behaviour. These models are constantly adapted and improved by the evolutionary selection and preferential reproduction of those bacterial cells that make more precise predictions than others. In basic organisms (like bacteria) this happens by the blind trial-and-error “learning” process generated by random genetic mutations and subsequent selection.
In more sophisticated organisms (such as us humans), it can also happen through the cognitive generation and testing of predictions, so we can let our models die in our own stead when they generate the wrong kind of predictions. This is what neuroscientists call predictive coding. Mental models can indeed consist of abstract representations of the potential outcomes of a course of action but, again, they do not need to be explicitly representational. Think of a gut feeling that guides your decisions (we’ll get back to properly trained intuition, a very complicated topic, later on).
While our mental models are much more powerful (and intricate) than the basic anticipatory behaviour of a simple bacterium, the core principles remain exactly the same: we generate a set of expectations, based on which we bias the selection of a certain path of action, and then monitor whether our predictions are coming true, or whether our sensory input deviates from what was anticipated by our models. In the latter case, we had better correct our models (in more or less targeted and systematic ways) and adjust our course of action according to the predictions of the revised model.
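The expect–act–compare–revise cycle just described can be stylised as a tiny loop. This is a deliberately trivial sketch under invented assumptions (a single numerical expectation, a fixed learning rate), not a theory of cognition; the names `adaptive_loop`, `world`, and `lr` are ours, made up for the illustration.

```python
def adaptive_loop(model, world, lr=0.5, steps=20):
    """Expect-act-compare-revise: a toy predictive loop.

    'model' is a single number: the expected outcome of acting.
    Each cycle, reality answers, we measure the prediction error,
    and we correct the model by a fraction of that error.
    """
    for _ in range(steps):
        outcome = world()          # what actually happens
        error = outcome - model    # deviation from expectation
        model += lr * error        # targeted correction
    return model

# A stable environment that always yields 7.0; the agent's
# expectation converges towards it through repeated correction.
belief = adaptive_loop(model=0.0, world=lambda: 7.0)
```

Nothing hangs on the details: the same skeleton (predict, compare, revise) is what the text claims is shared between the bacterium, everyday human learning, and science.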
Science is no different. In fact, it is nothing but a highly specialised and optimised way to make the formulation and adjustment of our models of the world much more efficient and rigorous. At first glance, it happens exclusively at the level of propositional knowledge and formalised reasoning. At least that’s what the traditional view which we associated with machine thinking and computationalism tells us. In reality, even scientific models often depend on all kinds of unspoken (and often unspeakable) experiences and skills underneath the rational arguments.
Think again about riding a bike: it is a practically learned skill that cannot be expressed entirely in terms of propositional knowledge. Doing empirical scientific work (and even generating scientific theories!) is exactly the same: a skill that needs to be acquired through practice. You can’t just tell someone how to do it, they have to learn it by doing. Even the purest mathematician will acknowledge that what they do cannot simply be replaced by explaining the required logical rules and symbols to a layperson without any experience in maths. Again, intuition matters! Neither can mathematical exploration be automated by some kind of algorithm. Even the most abstract work in the formal sciences is thus as much an art that involves the creative framing of questions as it is the mere rote application of formalised logical rules.
This is why we can think of science as being, at heart, the art of modelling, another term that goes back to Robert Rosen. Once more, it is worth pointing out that we use the term “model” in an unusually broad sense here. Setting up a laboratory experiment, for example, qualifies as a modelling exercise in this sense: a good experiment sets out to test a specific set of hypotheses or predictions. It does so by abstracting and isolating a small number of relevant factors (to be observed in the experimental setting) from the messy entangled reality of the natural phenomenon to be understood. Like any other model, it sets clear boundaries to what we consider important, and backgrounds what we do not. In addition, a good experimental design provides controls that help us exclude confounding outside influences.
This is really not so different from what most people think of as a proper scientific model: a set of equations that abstracts and isolates a small number of relevant factors (called the state variables of the model), postulates a clearly defined boundary (through what modellers call initial and boundary conditions, backgrounding what is not relevant), and a number of control methods (such as sensitivity analysis) that help us rule out certain artifactual modelling results, i.e., unintended consequences arising from the practice or methodology of modelling itself.
Other types of models frequently used in scientific research include computer simulations, blueprints and other technical diagrams, physical (scale) models, and many of the more informal arguments that we use to express hypotheses, i.e., expectations, about some kind of phenomenon or observation. Considered at a general level, they all work in analogous ways, according to what we have just outlined.
And here’s the thing: even though we do our very best to make a scientific model of some aspect of the world explicit and rigorously logical, the way we first build the model is not so straightforward (or even rational). Because, once again, we bump into the problem of relevance: how to pick out those features of the large world that matter in our particular situation? As we have argued before, this is not something that we can solve in a formalised, entirely logical way. That’s why scientific modelling is indeed an art — although one of the highest art forms humanity has ever invented.
In sum: it is useful and accurate to consider science a kind of skilful modelling practice. It shares this central feature with all the other, diverse and historically evolving ways by which humans have been and are still creating reliable knowledge of the world in general. In fact, if researchers like Rosen, Maturana, and Varela have a point (and they do, as we shall argue in the last two parts of the book), then scientific knowledge generation is continuous with how all limited living beings get to know the world.
What a humbling thought! And it enables us to contextualise and ground our (scientific) knowledge in the nature of life itself and its many evolutionary adaptations. This is not arbitrary at all. Indeed, it is as solid a foundation as it gets. We no longer need to fall back onto the unrealistic and unattainable god’s-eye-view of objectivist naïve realism to achieve sound and rigorous insight into our large world.
But, “wait a minute!” you may say. How is this any different from the fallacy of grounding all our knowledge in the fundamental laws of physics? Aren’t we just replacing those by the (equally abstracted and open-to-revision) theories of ecology and evolution?
We do not think so, for two main reasons. First of all, we use evolutionary and ecological arguments, granted, but also ground our theory of knowledge on experience. As we said before, a naturalist approach takes advantage of all the knowledge currently available, but does not exclusively rely on it as a foundational principle. And second, in contrast to the fundamental laws of physics, our evolutionary account includes a detailed (and empirically testable) account of us (or indeed any limited intelligence) as the epistemic agents exploring our large worlds. We’ll come to that in the last two parts of the book.
Still, we are using the science of the human epistemic agent to come up with a philosophy for the human epistemic agent (and the other way around). Isn’t this viciously circular?
By now, it should be dawning on us that this kind of self-referentiality (or strange loop) is inescapable if we are to truly make sense of our relation with the world. And it is not necessarily vicious, as we shall show in the rest of the book. Granted, embracing such potentially paradoxical principles comes with a number of drawbacks. For instance, it clashes with the mechanistic view of science we have outlined previously. And we have to abandon absolute certainty as an achievable goal for a limited intelligence. It is unrealistic (and irrational) to believe that we can ever know anything without any doubt at all. But, in turn, we get a realistic view of science that goes beyond mechanism and actually works in practice for us limited humans — right here on Earth at the beginning of the 21st century.
On this view, science is not some completely distinct way of knowing, detached from everyday human learning and adaptation. Instead, it applies special standards and rules to the way by which we formulate models and validate them against our experiences in general. In other words, science is a particularly sophisticated way of using models as tools to understand and explain aspects of our surroundings that are relevant to us in our particular situation. In fact, we’d argue it is the best way we have come up with so far to structure our (individual and collective) experience of the large world (our nomological order). And: it gets better (more adapted to our changing environment) over time. What more do we really need?
Science
Now we have a framework to characterise science as an embodied cognitive process. But what about its social aspects? What is the function that scientific research serves in our contemporary society? What do we need science for at this point in human history? And what kind of minimal principles must a scientist adopt to be counted (by her fellow humans) as a proper scientist under our current circumstances?
Sociologist Robert K. Merton, in the 1950s, came up with a good answer to the first set of questions, one that we think is still largely valid: the function of (the institution of) science in our society is to provide us with certified knowledge — knowledge with a quality label that we can trust. Note that we are talking about basic science here. Its function is not to develop new technology or to provide better health care, or a higher standard of living. Such applied benefits are fortuitous secondary consequences of the primary process of scientific knowledge production, whose quality we will be focussing on here.
How, then, do we ensure maximum achievable trustworthiness of the knowledge produced by science? One way is what Merton calls the ethos of science: the behavioural standards that make the production of scientific knowledge stand out compared to other ways of knowing. What kinds of standards and values must we adopt, what procedures must we commit to, to be accepted as proper scientists?
Merton identified four characteristics (which we now call Mertonian norms). They are still widely used, but they are a bit stuck in the machine view of the world in that they treat science as a formalisable (almost mechanical) activity. The basic idea is that, if you faithfully follow Merton’s rules, you are basically guaranteed to get sound knowledge from your scientific practice. Here are his four norms.
Universalism tells us that we should always strive for the largest possible generality. Scientific knowledge should be above our personal idiosyncrasies, or mere accidents of time or place. Knowledge claims must be judged by impersonal and (ideally) universally accepted criteria. We agree. But a few qualifications are in order: generality, as an abstract ideal, often fails where phenomena are heavily context-dependent. Think of the replication crisis in psychology or medical science, for example. Apart from bad statistics and a lack of quality standards, the fact that many complex phenomena simply cannot be replicated across different circumstances must also play a role. The temptation in such cases is to over-generalise, which is neither effective nor desirable.
Disinterestedness is another norm that demands detachment of scientific knowledge from our everyday concerns. It stipulates that scientists must act in a selfless manner when it comes to the pursuit of knowledge. There are many obvious examples of this norm breaking down with dire consequences: think of corrupt scientific shills acting in the interests of the tobacco or fossil fuel lobby to obfuscate and deny the ill-effects of smoking or global heating for their own personal gain. And yet, personal interests can also be put to good use in the pursuit of knowledge. In general, once we’ve accepted that scientific knowledge is fundamentally human knowledge — allowing us to find our way around our world — and is therefore always motivated and value-laden in one way or another, it becomes clear that this norm is just as unattainable as absolute and certain knowledge.
Merton (somewhat surprisingly) calls his third norm communism (often neutered in American editions of his work to communalism). This has nothing to do with political revolutions or the rise of the proletariat. Instead, it means the common and shared ownership of the results of scientific discovery — more or less what we mean by the less politically charged term “open science” today. With some minor qualifications (gain-of-function research on viruses is perhaps best kept secret), we believe this is still a good general guideline. But it is not really exclusive to science, is it? Only mushrooms grow in the dark: knowledge is best gained by sharing it, and this applies almost anywhere humans are trying to gain knowledge. Thus, communism (alas!) is also not what sets scientific knowledge production apart.
The fourth and final norm is organised scepticism. All ideas must be systematically put to the test. Again, we can agree with some qualifications. First, as we have already mentioned, all scientific practices and theories (including mechanistic and reductionist ones) come loaded with metaphysical baggage that cannot be empirically validated. We must acknowledge and learn to live with that fact. And second, it is obviously not very productive to doubt everything when doing science. Any scientific practice today is deeply embedded in a long and (hopefully) fruitful tradition, grounded on a corpus of knowledge that we usually take for granted when advancing new knowledge claims. If we were not standing on the shoulders of giants, we could not see as far as we do, and we could not progress beyond endless repetitions of the same experiments, over and over again. So scepticism isn’t universal in science either.
While some aspects of Merton’s norms are still valid and useful today, they fall short of giving us a unique and complete picture of what it means to gain knowledge in a scientific manner. They are too abstract, too removed from actual scientific practice in a large and complex world, to serve as reliable guides on their own. What we can learn from Merton’s efforts is that there is no easy way to prescribe norms that guarantee certified knowledge as an outcome. Instead, which norms to apply will usually depend on the situation.
While Merton focuses on the social function of science as an institution, it may be more straightforward to look at the personal commitments a researcher has to make to be counted as a proper scientist. This is the last question we asked at the beginning of this section. Perhaps, it is possible to agree on a basic personal commitment, a convention, agreed upon by the whole community of contemporary researchers, that determines whether what you do is science or not?
What about the commitment to methodological and explanatory naturalism? It is a good candidate, like the Hippocratic Oath for physicians. Put simply: no matter what your metaphysical worldview, whether you are religious or not, whether you believe in ghosts, or in the flying spaghetti monster, whether you are a reductionist or an anti-reductionist (or neither), whether you are a machine thinker, whatever your notion of reality, whatever your political views or moral values, however much you disagree with our views, you can be a researcher (maybe even a good one), but you cannot invoke supernatural or any other mysterious forces or entities in your practices and explanations if you want to be a proper scientist.
This is why we require mathematical proofs without any inexplicable gaps in them. This is why scientists like mechanistic explanations that describe phenomena in terms of step-by-step causal chains from some starting point to some outcome. This is why teleology and circular explanations are banned. This is why vitalism, explaining life through enigmatic forces, or panpsychism, claiming that all matter is conscious in some mysterious way, are not proper scientific doctrine. Nor is creationism or any form of “intelligent” design. Nor is the transmutation of gold according to ancient alchemical traditions. They all go beyond the confines of naturalism. To a proper scientist, the only real magic is not really magic.
The problem, as we have mentioned before, is that “naturalism” is notoriously difficult to define in any precise or general way. What is considered scientific or natural changes (sometimes radically) over time. Astrology and alchemy used to be perfectly reasonable research disciplines, until they weren’t. We’re not sure about string theory at the moment. If it cannot be put to the test, is it really science?
There is no way around the basic fact that our understanding of the scientific method itself has undergone drastic changes throughout human history, and is likely to change again. The standards and rules that determine what is proper science constantly shift because they are personal and social conventions after all.
In fact, if you think about it, it is essential to recognise the mutable nature of “naturalism” if you are to buy into the argument we are trying to make here. It is our explicit aim to reform and extend what we mean by scientific knowledge! Naturalism evolves, it adapts. And it does so in lockstep with the evolution of scientific knowledge itself. This is a feature, not a bug, of our account! Indeed, it is a central pillar of any naturalistic philosophy of science.
That’s all perfectly plausible, you may say, but doesn’t it put us back to square one? It seems that we’ve been going around in circles. Yes, we have established that science is a modelling practice that is continuous with (and therefore grounded in) the way limited beings generally get in touch with the world. And, yes, we have special standards and rules for defining what proper science is — in particular, that it must be based on naturalistic methods and explanations. But these criteria are far from perfect or complete, and they can change at any moment in time. Doesn’t this expose us to the kind of post-truth relativism we are trying to avoid? Isn’t it all just social discourse and political power games in the end?
We really don’t think so. In the remainder of the book, we will show you many additional examples of what we think is good science. Focussing on examples is the best we can do. It illustrates that we can distinguish, most of the time, what is likely to work and be productive in a given situation. It’s just that we have to get used to the fluidity and self-referential nature of the whole business. Just as there is no absolutely certain knowledge, there is no fixed general framework for producing scientific knowledge. Digesting this insight can be a rather disorienting experience. Scientific practice, and the standards that let us assess its quality, co-evolve with one another. In fact, they constantly build on each other. This is as it should be! As we learn more about the world, we get better at learning itself.
But this does not mean that our assessment of scientific knowledge production is arbitrary, just that it is necessarily context-dependent, and rooted in actual scientific practice. When judging the quality of our knowledge, we are restricted to local and preliminary measures that remain open to revision. This may not be as good as a simple and concise list of universally and eternally valid criteria. It’s a work in constant progress. But at least it’s achievable. And this is all that really matters to us.
Robustness
In this same spirit, there is one more thing we should try: so far, we have focussed on scientific knowledge production as a peculiar kind of modelling activity, which we delimit by defining appropriate standards and rules for it to be accepted as proper science.
To get a complementary perspective, and to ground scientific knowledge further in our encounters with our large world, we can also look at the outcome of this activity. We can assess the quality of the knowledge claims that result from scientific investigation. What makes them worthy of certification by society? Rather than prescribing what is a valid scientific model, or what is a valid scientific explanation, we examine the knowledge that is generated in the process and see if it stands out compared to other kinds of (non-scientific) knowledge in some recognisable — ideally, measurable — way.
This has the drawback that we can only tell in retrospect whether a given model or explanation yields scientific knowledge; we may not be able to judge its quality while we are still using that model or explanation to gain knowledge in the first place. Here too, we should not expect a compact shortlist of definite criteria that render a knowledge claim scientific in a simple and clear-cut manner. The best we can hope for is that some knowledge claims turn out to be more scientific than others in a given context. In many cases, this will not be a matter of black and white. Recall string theory. Is it proper science or not? Even physicists themselves seem to disagree over the matter. So who are we to tell?
There is a vast philosophical literature on what makes knowledge scientific. We won’t even properly graze the surface, and certainly won’t do any justice to the long and distinguished history of debates on the issue. Instead, we will only pick out a few raisins that allow us to contextualise two contemporary approaches to the problem that we find particularly appealing and relevant to our argument.
Much of the discussion about the nature of scientific knowledge turns around the question how it can be properly validated against evidence. We hopefully all agree that knowledge claims that are not backed by any evidence (or, worse, that fly in the face of available evidence) should not be considered certified scientific knowledge. But what does it mean to validate knowledge empirically? It’s not entirely clear.
The logical positivists, for example, believed that only propositional knowledge that is derived logically and verified empirically should count as scientific (and, in fact, as proper knowledge in the first place). This sounds nice and simple, but it has serious problems. The first is that the logical positivists never agreed among themselves on what exactly they considered logical, or verified. Even worse, verification quickly turned out to be a logical impossibility, as famously demonstrated by Karl Popper, probably the most well-known philosopher of science. Quite the opposite, Popper argued: knowledge claims must be falsifiable to be scientific. You have to be able to refute your conjectures.
This causes its own set of problems. Popper never came up with a positive account of how knowledge does connect to reality. Some theories simply resist falsification, but that doesn’t make them true. He coined the term verisimilitude (“truth-likeness”) for such theories. But that doesn’t really help if we want a realistic theory of knowledge. Another problem with Popper’s account is that it is far too simple (once again) to capture the messy and diverse reality of scientific practice. Be that as it may, his insight that scientific knowledge should be subject to falsification (at least in principle) remains rock solid today.
Unfortunately, this criterion is (ab)used by many who consider themselves radical empiricists. They wield it to slay any philosophical assumption, concept, or theory that goes beyond what is immediately testable by experiment. The problem with this attitude is, as we have seen, that there is no empirical practice that is free of such metaphysical assumptions. So we have the paradoxical situation that we cannot do science without such assumptions, but we cannot put them to any empirical tests either.
The only way out of this conundrum is to accept that this is how life works, and to adjust our expectations accordingly. But the situation isn’t hopeless either. While we may not be able to put our metaphysical baggage to the test, we can judge it by other criteria. For instance, we can compare different metaphysical frameworks in terms of their coherence, or their explanatory power: what kind of empirical research do they enable? For us (as for any limited beings) that should be justification enough.
With this background in mind, let us now look at two particular proposals for what may be good epistemic criteria to judge the quality of scientific knowledge claims. We want to know how well our knowledge helps us find our way around in our large world. Once more, we turn to Bill Wimsatt for help. He suggests multiple determination, or robustness, as the most applicable quality criterion for (scientific) knowledge. A knowledge claim is robust if what it refers to can be detected, observed, derived, measured, manipulated, or otherwise accessed in many independent ways. As a consequence, a robust item of knowledge also tends to be testable by a variety of distinct experimental methods. This is a firmly grounded measure.
Wimsatt’s approach is based on a philosophical view of science that is perspectival. This is why it fits so well with what we have argued so far: each of the distinct ways of confirming an item of knowledge corresponds to a specific situated perspective. Each such perspective may be limited and idiosyncratic in its own way, because it comes from somewhere, and represents the view of someone. And, as we shall see, different perspectives may not neatly coincide or add up to a complete picture of the problem or phenomenon at hand. That does not matter: what is important is that they do not directly contradict each other, and that they confirm a knowledge claim independently, that is, without relying on each other or sharing common underlying biases or untested assumptions.
Robustness can be seen as an indicator of the trustworthiness (and hence value and certifiability) of knowledge. We can assess the robustness of a specific knowledge claim directly and in any given context by simply counting the number of independent ways by which it was generated, and by making sure that these different ways were truly independent of each other. Wimsatt calls this robustness analysis. It is applied empirical philosophy! Once we have confirmed its robustness, we know that a knowledge claim is reliable. It has become a scientific fact, if you will. This is of obvious practical importance: we can then begin to build more knowledge on top of it without (too much) continued scepticism about its validity.
One evident problem with this is that new discoveries or insights are never robust, by definition. Newly generated knowledge claims only become reliable over time, as they are confirmed and cross-validated by various independent means. The theory of life we present in the last two parts of the book, for example, is not particularly robust yet. We freely admit that. Still, we believe it is plausible enough to be a good candidate for becoming robustly established in the future. And we also trust it will prove useful, because of its internal coherence and explanatory potential. But we’re getting ahead of ourselves.
One thing is clear: the kind of after-the-fact measures for the quality of knowledge we are discussing in this section are not good guidelines for how to perform scientific research, or how to assess science in the making. And you don’t want to go for robustness if you aim to discover something exciting and new! Instead, robustness is useful in situations where you require a sound basis for your decision making.
And if what you have discovered remains sparsely validated over extended amounts of time, if nobody is able to confirm it by different means, then you surely do have reason to worry. At the very least, we recommend you not build too many extra claims on that piece of knowledge until it is independently confirmed. We also try to tread carefully here, for this very reason, focussing on the potential of our approach, rather than claiming we know with utmost confidence that it applies in all its glorious detail.
Beyond its rather conservative nature, robustness has other drawbacks. In particular, it includes no measure of how useful an item of knowledge is in any given situation. Again, this affects novel knowledge claims in particular. We may claim that the theory of life we are describing here is very useful, even if it is not yet robustly confirmed. Many physicists think the same about string theory. For this reason, it would be helpful if we could also assess the usefulness of a particular knowledge claim, wouldn’t it?
This is what Hasok Chang, whose notion of reality we have already used above and in the last chapter, attempts to achieve. Recall that, on Chang’s account, reality is what affects the way we act in the world. This is the hallmark of a type of philosophy called pragmatism. So it should come as no surprise that he also suggests judging our knowledge claims based on how they shape and enable our actions. This may be a bit counterintuitive at first: we are no longer assessing a knowledge claim based on its content, on what it says about the world, but rather on the active influence it has on our lives.
We find this approach quite appealing. It nicely complements Wimsatt’s robustness account. To understand why, we need to revisit our distinction between abstract propositional knowledge and the more embodied kinds of knowing: procedural, perspectival, and participatory — or in Chang’s words: knowledge-as-information vs. knowledge-as-ability (or active knowledge). Wimsatt’s robustness works well for the former, because it allows us to assess how well the content of a knowledge claim connects to reality. Chang’s criterion, in contrast, focuses on the latter, since it measures the quality of the actions that result from our knowledge, its real-world consequences. The two approaches really do go hand in hand.
But how exactly do we measure the quality of actions that result from our newly gained knowledge? What exactly do we want to assess here? Remember: to know our world means to find our way around, to be at home in our universe. Let’s try and make this kind of vague intuition more precise: as an epistemic agent, active knowledge enables you to pick out the affordances in your arena that are truly relevant to you, that afford a productive path of action. It allows you to successfully pursue your goals. What we want is a measure for the probability of this kind of success. But what would be a good indicator to capture something so elusive? This is not a trivial question.
Chang argues that it is the aim-oriented coordination of our actions that we want to focus on. He calls this operational coherence: knowledge enables you to act in a way that makes sense to you. And: your actions make sense to you if they enable you to successfully pursue your goals. As Chang himself admits, this will need some more work before it becomes applicable in practice. It is not as clear cut as Wimsatt’s measure of robustness. But it certainly points in an interesting direction.
One major challenge is that what we consider coherent, what makes sense, can be rather subjective. And to be clear: Chang certainly doesn’t mean to imply that you should act selfishly to achieve your goals over those of other living agents. Quite the contrary: coherence must be sustainable in your environmental and social context. It is, at heart, a multi-level phenomenon. Operational coherence, if it is to serve our purpose, must not only apply at the level of the individual, but also at that of the community and species. We’ll come back to the problem of multi-level coherence later.
In the meantime, let’s focus on the core idea: to act coherently is to act in a manner that tightens our agent-arena relationship in an enduring and sustainable manner. This is what allows us to survive and thrive. If our knowledge ceases to connect robustly to reality, if it no longer leads to coherent action, we lose our grip on reality. We can see that happening all around us right now.
It is true: the old machine view has served us well since its inception at the dawn of the scientific revolution. Nobody will deny that our progress has been truly staggering. We can and should be proud of ourselves! But we are living through an era of rapidly diminishing returns. The signs are everywhere. And the toxic side effects of machine thinking are turning into existential threats. There is no way around it: the age of modernity is coming to an end. The machine view no longer applies — the crude map it provides is outdated, unable to cope with the complexities of our current challenges.
To regain our grip on reality, we need a new view of the world. And with it, a new kind of science that springs from it and supports it. The following chapters will provide an outline of what this may look like. It presents a view of science that refuses to coddle our minds with the promise of certain knowledge. Nor will it equip us with a set of standards that guarantee certified truth. It won’t fool us with the illusion of predictability or control. None of this was ever worthwhile, none of it achievable, for us limited beings.
It’s high time we move on! Let us evolve together with our worldview. Let us extend the horizon of our consciousness. It has happened before in human history. We can transcend our conceptual limitations in ways that nobody can foresee. We must recover our slipped grip on reality. And we have to direct the evolution of our agent-arena relationship in ways that are more productive and sustainable.
It’s true and we admit it openly: our own metaphysical assumptions may be just as untestable as those of the machine view. But this really cannot be avoided, as we have tried to explain. This is really not the point: what matters more is what we can do with our new framework that we cannot within the mental cage of machine thinking. What robustness, what coherence, what explanatory power do we gain? If our perspective opens up a wiser path into our future, then it is surely worth exploring!
Thomas Nagel’s essay “What it is like to be a bat” is one of the most highly cited papers in philosophy. One of its main conclusions is that we cannot ever know exactly what it is like to be a bat. But it seems plausible that the bat knows. The situation is quite similar for the scientist: it is impossible to come up with concise, definite, and universal definitions. And, yet, we hope that scientists know what it is like to be a scientist. We’ll have quite a bit more to say about that in the chapters to come.
In this same spirit, there is one more thing we should try: so far, we have focussed on scientific knowledge production as a peculiar kind of modelling activity, which we delimit by defining appropriate standards and rules for it to be accepted as proper science.
To get a complementary perspective, and to ground scientific knowledge further in our encounters with our large world, we can also look at the outcome of this activity. We can assess the quality of the knowledge claims that result from scientific investigation. What makes them worthy of certification by society? Rather than prescribing what is a valid scientific model, or what is a valid scientific explanation, we examine the knowledge that is generated in the process and see if it stands out compared to other kinds of (non-scientific) knowledge in some recognisable — ideally, measurable — way.
This has the drawback that we can only tell in retrospect whether a given model or explanation yields scientific knowledge; we may not be able to judge its quality while we are still using the model or explanation to gain knowledge in the first place. Here, too, we should not expect a compact shortlist of definite criteria that render a knowledge claim scientific in a simple and clear-cut manner. The best we can hope for is that some knowledge claims turn out to be more scientific than others in a given context. In many cases, this will not be a matter of black and white. Recall string theory: is it proper science or not? Even physicists themselves seem to disagree over the matter. So who are we to tell?
There is a vast philosophical literature on what makes knowledge scientific. We won't even scratch the surface, and certainly won't do justice to the long and distinguished history of debates on the issue. Instead, we will pick out just a few highlights that allow us to contextualise two contemporary approaches to the problem that we find particularly appealing and relevant to our argument.
Much of the discussion about the nature of scientific knowledge turns on the question of how it can be properly validated against evidence. We hopefully all agree that knowledge claims that are not backed by any evidence (or, worse, that fly in the face of the available evidence) should not be considered certified scientific knowledge. But what does it mean to validate knowledge empirically? That is not entirely clear.
The logical positivists, for example, believed that only propositional knowledge that is derived logically and verified empirically should count as scientific (and, in fact, as proper knowledge in the first place). This sounds nice and simple, but it has serious problems. The first is that the logical positivists never agreed among themselves on what exactly they considered logical, or verified. Even worse, verification quickly turned out to be a logical impossibility, as famously argued by Karl Popper, probably the best-known philosopher of science. Quite the opposite, Popper claimed: knowledge claims must be falsifiable to be scientific. You have to be able to refute your conjectures.
This causes its own set of problems. Popper never came up with a positive account of how knowledge does connect to reality. A theory that survives repeated attempts at falsification is corroborated, but that does not make it true. Popper coined the term verisimilitude ("truth-likeness") for the degree to which such a theory approximates the truth. But that notion does not really help if we want a realistic theory of knowledge. Another problem with Popper's account is that it is (once again) far too simple to capture the messy and diverse reality of scientific practice. Be that as it may, his insight that scientific knowledge should be subject to falsification (at least in principle) remains rock solid today.
Unfortunately, this criterion is (ab)used by many who consider themselves radical empiricists. They wield it to slay any philosophical assumption, concept, or theory that goes beyond what is immediately testable by experiment. The problem with this attitude is, as we have seen, that there is no empirical practice that is free of such metaphysical assumptions. So we have the paradoxical situation that we cannot do science without such assumptions, but we cannot put them to any empirical tests either.
The only way out of this conundrum is to accept that this is how life works, and to adjust our expectations accordingly. But the situation isn’t hopeless either. While we may not be able to put our metaphysical baggage to the test, we can judge it by other criteria. For instance, we can compare different metaphysical frameworks in terms of their coherence, or their explanatory power: what kind of empirical research do they enable? For us (as for any limited beings) that should be justification enough.
With this background in mind, let us now look at two particular proposals for what may be good epistemic criteria to judge the quality of scientific knowledge claims. We want to know how well our knowledge helps us find our way around in our large world. Once more, we turn to Bill Wimsatt for help. He suggests multiple determination, or robustness, as the most applicable quality criterion for (scientific) knowledge. A knowledge claim is robust if it can be detected, observed, derived, measured, manipulated, or otherwise accessed in many independent ways. As a consequence, such an item also tends to be testable by a variety of distinct experimental methods. This is a firmly grounded measure.
Wimsatt’s approach is based on a philosophical view of science that is perspectival. This is why it fits so well with what we have argued so far: each of the distinct ways of confirming an item of knowledge corresponds to a specific situated perspective. Each such perspective may be limited and idiosyncratic in its own way, because it comes from somewhere, and represents the view of someone. And, as we shall see, different perspectives may not neatly coincide or add up to a complete picture of the problem or phenomenon at hand. That does not matter: what is important is that they confirm a knowledge claim independently without directly contradicting each other, that is, that they do not rely on each other or share common underlying biases or untested assumptions.
Robustness can be seen as an indicator of the trustworthiness (and hence value and certifiability) of knowledge. We can assess the robustness of a specific knowledge claim directly and in any given context by simply counting the number of independent ways by which it was generated, and by making sure that these different ways were truly independent of each other. Wimsatt calls this robustness analysis. It is applied empirical philosophy! Once we have confirmed its robustness, we know that a knowledge claim is reliable. It has become a scientific fact, if you will. This is of obvious practical importance: we can then begin to build more knowledge on top of it without (too much) continued scepticism about its validity.
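To make the counting idea concrete, here is a toy sketch of such a robustness analysis in code. This is our own illustration, not Wimsatt's formalism: the labels, the dependency sets, and the rule that two ways of access count as independent when they share no underlying method or assumption are all simplifying assumptions introduced for the example.

```python
def robustness(confirmations):
    """Toy robustness score: the number of mutually independent ways
    of access that confirm a knowledge claim.

    `confirmations` maps a label for each way of access to the set of
    methods or assumptions it relies on. Two ways of access count as
    independent here if those sets do not overlap (a deliberately
    crude stand-in for genuine epistemic independence).
    """
    accepted = []
    for label, deps in confirmations.items():
        # Keep this way of access only if it shares no dependency
        # with the ones already accepted (a greedy count).
        if all(deps.isdisjoint(confirmations[kept]) for kept in accepted):
            accepted.append(label)
    return len(accepted)


# Hypothetical example: three ways of detecting the same entity.
claim = {
    "microscopy":       {"optics", "staining"},
    "centrifugation":   {"sedimentation"},
    "fluorescence_tag": {"optics", "genetic_marker"},  # shares "optics"
}
print(robustness(claim))  # → 2: the fluorescence tag is not independent
```

Note that even this toy version exposes the hard part of robustness analysis: the score depends entirely on how honestly we enumerate the shared assumptions behind each way of access, which is a matter of judgement, not computation.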
One evident problem with this is that new discoveries or insights are never robust, by definition. Newly generated knowledge claims only become reliable over time, as they are confirmed and cross-validated by various independent means. The theory of life we present in the last two parts of the book, for example, is not particularly robust yet. We freely admit that. Still, we believe it is plausible enough to be a good candidate for becoming robustly established in the future. And we also trust it will prove useful, because of its internal coherence and explanatory potential. But we’re getting ahead of ourselves.
One thing is clear: the kind of after-the-fact measures for the quality of knowledge we are discussing in this section are not good guidelines for how to perform scientific research, or how to assess science in the making. And you don’t want to go for robustness if you aim to discover something exciting and new! Instead, robustness is useful in situations where you require a sound basis for your decision making.
And if what you have discovered remains sparsely validated over extended periods of time, if nobody is able to confirm it by different means, then you surely do have reason to worry. At the very least, we recommend you not build too many extra claims on that piece of knowledge until it is independently confirmed. We also try to tread carefully here, for this very reason, focussing on the potential of our approach, rather than claiming we know with utmost confidence that it applies in all its glorious detail.
Beyond its rather conservative nature, robustness has other drawbacks. In particular, it provides no measure of how useful an item of knowledge is in any given situation. Again, this affects novel knowledge claims in particular. We may claim that the theory of life we are describing here is very useful, even though it is not yet robustly confirmed. Many physicists think the same about string theory. Wouldn't it be helpful, then, if we could also assess the usefulness of a particular knowledge claim?
This is what Hasok Chang, whose notion of reality we have already used above and in the last chapter, attempts to achieve. Recall that, on Chang's account, reality is what affects the way we act in the world. This is the hallmark of a type of philosophy called pragmatism. So it should come as no surprise that he also suggests judging our knowledge claims by how they shape and enable our actions. This may be a bit counterintuitive at first: we are no longer assessing a knowledge claim based on its content, on what it says about the world, but rather on the active influence it has on our lives.
We find this approach quite appealing. It nicely complements Wimsatt’s robustness account. To understand why, we need to revisit our distinction between abstract propositional knowledge and the more embodied kinds of knowing: procedural, perspectival, and participatory — or in Chang’s words: knowledge-as-information vs. knowledge-as-ability (or active knowledge). Wimsatt’s robustness works well for the former, because it allows us to assess how well the content of a knowledge claim connects to reality. Chang’s criterion, in contrast, focuses on the latter, since it measures the quality of the actions that result from our knowledge, its real-world consequences. The two approaches really do go hand in hand.
But how exactly do we measure the quality of actions that result from our newly gained knowledge? What exactly do we want to assess here? Remember: to know our world means to find our way around, to be at home in our universe. Let's try to make this rather vague intuition more precise: active knowledge enables you, as an epistemic agent, to pick out the affordances in your arena that are truly relevant to you, that afford a productive path of action. It allows you to successfully pursue your goals. What we want is a measure of the probability of this kind of success. But what would be a good indicator to capture something so elusive? This is not a trivial question.
Chang argues that it is the aim-oriented coordination of our actions that we want to focus on. He calls this operational coherence: knowledge enables you to act in a way that makes sense to you. And: your actions make sense to you if they enable you to successfully pursue your goals. As Chang himself admits, this will need some more work before it becomes applicable in practice. It is not as clear cut as Wimsatt’s measure of robustness. But it certainly points in an interesting direction.
One major challenge is that what we consider coherent, what makes sense, can be rather subjective. And to be clear: Chang certainly doesn’t mean to imply that you should act selfishly to achieve your goals over those of other living agents. Quite the contrary: coherence must be sustainable in your environmental and social context. It is, at heart, a multi-level phenomenon. Operational coherence, if it is to serve our purpose, must not only apply at the level of the individual, but also at that of the community and species. We’ll come back to the problem of multi-level coherence later.
In the meantime, let’s focus on the core idea: to act coherently is to act in a manner that tightens our agent-arena relationship in an enduring and sustainable manner. This is what allows us to survive and thrive. If our knowledge ceases to connect robustly to reality, if it no longer leads to coherent action, we lose our grip on reality. We can see that happening all around us right now.
It is true: the old machine view has served us well since its inception at the dawn of the scientific revolution. Nobody will deny that our progress has been truly staggering. We can and should be proud of ourselves! But we are living through an era of rapidly diminishing returns. The signs are everywhere. And the toxic side effects of machine thinking are turning into existential threats. There is no way around it: the age of modernity is coming to an end. The machine view no longer applies — the crude map it provides is outdated, unable to cope with the complexities of our current challenges.
To regain our grip on reality, we need a new view of the world, and with it a new kind of science that springs from this view and supports it. The following chapters provide an outline of what this may look like. They present a view of science that refuses to coddle our minds with the promise of certain knowledge. Nor will it equip us with a set of standards that guarantee certified truth. It won't fool us with the illusion of predictability or control. None of this was ever worthwhile, none of it achievable, for us limited beings.
It's high time we move on! Let us evolve together with our worldview. Let us extend the horizon of our consciousness. It has happened before in human history. We can transcend our conceptual limitations in ways that nobody can foresee. We must recover our slipping grip on reality, and direct the evolution of our agent-arena relationship in ways that are more productive and sustainable.
It’s true and we admit it openly: our own metaphysical assumptions may be just as untestable as those of the machine view. But this really cannot be avoided, as we have tried to explain. This is really not the point: what matters more is what we can do with our new framework that we cannot within the mental cage of machine thinking. What robustness, what coherence, what explanatory power do we gain? If our perspective opens up a wiser path into our future, then it is surely worth exploring!
Thomas Nagel's essay "What Is It Like to Be a Bat?" is one of the most highly cited papers in philosophy. One of its main conclusions is that we can never know exactly what it is like to be a bat. But it seems plausible that the bat knows. The situation is quite similar for the scientist: it is impossible to come up with concise, definite, and universal definitions of what science is. And yet, we hope that scientists know what it is like to be a scientist. We'll have quite a bit more to say about that in the chapters to come.
The authors acknowledge funding from the John Templeton Foundation (Project ID: 62581), and would like to thank the co-leader of the project, Prof. Tarja Knuuttila, and the Department of Philosophy at the University of Vienna for hosting the project of which this book is a central part.
Disclaimer: everything we write and present here is our own responsibility. All mistakes are ours, and not the funders’ or our hosts’ and collaborators'.