to compute, v.
transitive: to determine, especially by mathematical means
In this appendix, you will find some additional definitions and clarifications concerning the tricky concept of “computation” beyond Church and Turing’s formal but rather narrow conceptualisation. What is it about? What kinds of computation exist? How do they relate to the notion of “mechanism?” And how do they relate to the natural behaviour of physical systems?
Computing
We’ve already mentioned in chapter 6 that the verb “to compute” entered the English language around the time of the scientific revolution from the Latin “computare”: “to count, sum up, reckon together.” Let us examine its meaning in some more detail.
“To determine by mathematical means” — that is, to calculate — implies a human activity, carried out by rote according to a finite and well-defined sequence of operations drawn from a finite set of well-defined rules. This set of rules is based on formal logic — the study of correct reasoning. As a formal science, formal logic is itself subject to all our arguments about the grounding and evolution of scientific knowledge, which means that its precise definition (like anything else) is limited and biased by human (cognitive) capabilities, and has changed in significant ways over historical time. Not to mention that there are complications that make it impossible to ground mathematics entirely in formal logic. We touch upon these complications in the main text. But be that as it may: in practice, it will suffice to say that computation is an activity based on some finite and well-defined set of logically consistent rules.
Computation concerns the manipulation of abstract symbols or signs, independent of their (contextual) meaning or physical actualisation. The great pragmatist philosopher Charles Sanders Peirce coined the term semiotics for the formal study of such sign processes. Semiotics focuses on the general nature of signs as used by an intelligence capable of learning by experience. All forms of logical inference — induction, deduction, and abduction — are examples of formal sign processes that humans use for reasoning.
Peirce distinguishes between the sign or symbol itself (the signifier), the object it refers to (the signified), and the interpretant, that is, the intelligent being performing semiosis — the interpretation from signifier to signified. Semiosis, therefore, is the process of meaning-making or sense-making. It crucially involves the signified object as the semantic content or reference of a sign.
Are you confused yet? We understand if you are. So let’s summarise what we’ve said so far: signs and symbols are abstract tokens that relate to each other syntactically according to some well-defined and consistent rules, while also relating semantically to the objects they represent. In other words, syntax is exclusively concerned with the abstract relations between symbols (in the symbolic domain), while semantics is concerned with their meaning, that is, the real-world objects the symbols represent (in the physical domain). The process of semiosis moves from the symbolic to the physical domain, to ground the abstract symbols and give meaning to them in the context of the large world we live in. In contrast, syntactic processes reside entirely in the small world of rule-based symbol manipulation, i.e., computation.
One thing should be crystal clear already: computation cannot be the same as meaning-making or sense-making. The former is purely syntactic; the latter necessarily involves semantics.
Apart from referring semantically to physical objects, symbols and signs can be actualised in symbol or sign vehicles, which are also physical structures. Take, for instance, the various computing devices described in chapter 7. As another example, consider the chemical bases of the DNA in your genome: adenine (A), cytosine (C), guanine (G), and thymine (T). These chemicals are symbol vehicles for the abstract genetic code, which consists of the triplet combinations of letters A, C, G, and T. These triplets are called codons. The code semantically relates codons on a strand of DNA to specific amino acids to be incorporated into polypeptide (protein) chains through the processes of transcription and translation. The abstract letter triplets are the symbols (signifiers), the nucleotide bases on the DNA strand are the symbol vehicles, and the amino acids in the polypeptide chain to be synthesised are the signified.
Note that the genetic code is both arbitrary and degenerate. In fact, it has to be. Which triplet corresponds to which amino acid is not strictly determined by any physical laws, but is a frozen historical accident — a natural convention. Moreover, the physical structures that are the symbol vehicles (base triplets in DNA) are distinct from the physical objects that are represented by the code (amino acids in a polypeptide chain). Finally, there are 64 triplet codons, but only 21 amino acids that occur in natural proteins. Therefore, multiple codons refer to one and the same amino acid, e.g., TTA, TTG, CTT, CTC, CTA, and CTG all lead to the incorporation of leucine. Degeneracy makes the code robust. Arbitrariness is an essential feature of coding in general, and is a fundamental requirement for open-ended evolution, as we explain in detail in the main text.
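The degeneracy just described is easy to make concrete with a small lookup table. The sketch below (in Python, using only a hypothetical excerpt of the full 64-codon table) shows the six leucine codons from the text all translating to one and the same amino acid:

```python
# A fragment of the genetic code (DNA sense-strand codons), illustrating
# degeneracy: several distinct codons (signifiers) map to one amino acid
# (signified). This is an illustrative excerpt, not the full 64-codon code.
CODON_TABLE = {
    "TTA": "Leu", "TTG": "Leu", "CTT": "Leu",
    "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
    "GCT": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "ATG": "Met",  # methionine, also the start codon
}

def translate(dna: str) -> list[str]:
    """Read a DNA strand three bases at a time and look up each codon."""
    return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna) - 2, 3)]

# All six leucine codons yield the same amino acid:
print(translate("TTATTGCTTCTCCTACTG"))  # ['Leu', 'Leu', 'Leu', 'Leu', 'Leu', 'Leu']
```

Note that the table itself is pure convention: nothing in the dictionary syntax forces TTA to mean leucine, which is exactly the arbitrariness the text describes.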
Note that the process of computing is entirely syntactic, even if it is implemented on a physical substrate (such as a von Neumann computer or a strand of DNA). It is concerned only with relations between abstract symbols, not their semantic content — what these symbols stand for, what they are representations of, in the physical domain. This is why computation is so easy to automate: it is completely context-independent, and does not involve any messy interpretation, improvisation, or any other form of creativity. On the downside, computing is inflexible, allowing no bending of the rules. It therefore requires no real agency beyond the rote application of successive operations to symbols according to a predetermined schedule, which is called an algorithm or program.
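The “rote application of successive operations according to a predetermined schedule” can be made concrete with a minimal Turing-style machine. The rule table below is a hypothetical example, sketched in Python: it flips every bit on its tape and halts at the first blank, consulting nothing but its fixed rules — no interpretation, improvisation, or creativity anywhere:

```python
# A minimal Turing-style machine: state transitions are looked up in a
# fixed rule table and applied by rote. This example machine simply
# inverts every bit on its tape, then halts at the first blank.
RULES = {  # (state, symbol) -> (symbol to write, head move, next state)
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
}

def run(tape: str) -> str:
    cells, head, state = list(tape + "_"), 0, "flip"
    while state != "halt":
        symbol, move, state = RULES[(state, cells[head])]
        cells[head] = symbol
        head += move
    return "".join(cells).rstrip("_")

print(run("10110"))  # '01001'
```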
We cannot stress this often enough: “computing” describes a purely symbolic and syntactic activity that human beings can perform, and the theory of computation is an abstract formalisation of this very peculiar human activity. Turing machines stand for human computers. The original symbol vehicle was a flesh-and-blood human being with a brain, which is thus (by definition) a computer in the sense of being able to achieve universal computation (at least in principle, given enough time and resources).
However, the brain is not only a computer! It can do many things besides computing. Being judgmental, feeling emotions, or experiencing cravings are three good examples of typical human activities that heavily involve cognitive processes in the brain but are not computational in nature. And, obviously, the brain cannot have evolved to compute. Computing is only a tiny subset of what a human brain can do. And this capability emerged very late in evolution, in some unknown animal ancestor whose nervous system had become complex enough to allow for abstract symbol manipulation.
Analog/Digital
There are qualitatively different kinds of computation. One important and fundamental distinction to make is whether the symbols to be manipulated are based on discrete or continuous representations (signifier-signified relationships). For the analog case, imagine a mechanical device that is sensitive to some physical force, or an electronic circuit receiving an electric current with a certain voltage as its input. For most practical purposes (ignoring micro-level complications such as discrete electrons and quantum effects) we can say that these kinds of input vary smoothly over some range of values.
Devices that employ such continuous symbolic representations are called analog computers. An example is the (mechanical or electronic) differential analyser, used to accurately integrate systems of differential equations with real-valued parameters. Such analog computers can, in principle, solve problems which are not Turing-computable. For instance, they can find solutions for chaotic systems with infinite precision. Of course, such devices are limited in their accuracy in practice, and it is possible to approximate them using digital computers to an arbitrary degree of precision.
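The last point — that a digital machine can approximate analog integration arbitrarily well, but only approximate it — can be illustrated in a few lines. The sketch below uses the simplest possible scheme (Euler’s method, chosen purely for illustration) to integrate dx/dt = −x, whose exact solution at t = 1 is e⁻¹; refining the discrete step shrinks the error without ever eliminating it:

```python
# Digital approximation of what a differential analyser does continuously:
# Euler integration of dx/dt = -x. The exact solution is x(t) = exp(-t).
# More (smaller) steps give a better approximation, never exactness.
import math

def euler(x0: float, t_end: float, n_steps: int) -> float:
    """Integrate dx/dt = -x in n_steps discrete Euler steps."""
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x += dt * (-x)   # a discrete step standing in for continuous flow
    return x

exact = math.exp(-1.0)   # the value an ideal analog integrator would reach
for n in (10, 100, 1000):
    print(n, abs(euler(1.0, 1.0, n) - exact))  # error shrinks as n grows
```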
That’s one of the main reasons analog computers are a bit out of fashion these days. But the situation may change again, because of one form of analog computing that is particularly useful for machine learning and is rapidly gaining popularity. This kind of analog symbol manipulation is called neuromorphic computing. Neuromorphic devices consist of artificial “neurons” with connections (called “synapses”) among each other that have some (real-valued) weight, indicating their respective strength. Such networks can “learn” to find solutions for some target task by smoothly adjusting and optimising their weights. They correspond to programmable hardware implementations of software-based artificial neural networks, which are heavily used in artificial intelligence research. Neuromorphic devices are a form of highly parallel analog computing that is not evidently algorithmic: information is processed in ways that are not transparent or sequential, and depend on the exact values of the connection weights.
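What “learning by smoothly adjusting weights” amounts to can be sketched in a few lines. The toy example below (plain Python, with an illustrative learning rate and training schedule, standing in for no particular neuromorphic hardware) trains a single artificial neuron by gradient descent until its real-valued weights reproduce the logical AND function:

```python
# A single artificial neuron "learns" the logical AND function by
# smoothly adjusting its real-valued weights (gradient descent on the
# squared error). Learning rate and epoch count are illustrative.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.5

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        delta = (out - target) * out * (1 - out)  # error gradient
        # smooth, continuous updates -- no discrete rule is consulted
        w1 -= lr * delta * x1
        w2 -= lr * delta * x2
        b  -= lr * delta

print([round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data])
```

After training, the rounded outputs match AND ([0, 0, 0, 1]); the “knowledge” resides entirely in the final values of w1, w2, and b, not in any explicit rule.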
A digital computer, in contrast, is a machine that operates with discrete representations. Most modern computing devices are digital or, to be more precise, binary. The only two symbols they recognise are 0 and 1. Numbers, text, and all kinds of other data and programming instructions are represented in binary code. Logical operations are then performed on coded strings of 0s and 1s.
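That “numbers, text, and all kinds of other data” share one binary representation can be shown directly. In the Python snippet below, one and the same eight-bit pattern is read once as the number 77 and once as the character “M”; the two readings differ only by convention (here, ASCII):

```python
# One bit pattern, two readings: the same eight bits encode a number
# or a character purely by convention (ASCII, in this case).
n = 77
bits = format(n, "08b")                 # the number 77 as an eight-bit string
print(bits)                             # '01001101'
print(chr(n))                           # 'M': the same bits read as a character
print(format(ord("M"), "08b") == bits)  # True: one pattern, two conventions
```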
The conceptual and practical advantages of digital computing are immediately evident from Turing’s model. Turing machines provide a precise definition of what we mean by “effective computation,” “machine,” and “algorithm,” as explained in detail in the first section of chapter 7. On top of all this, the model immediately suggests an engineering blueprint for real-world computing machines. In the case of analog computing, in contrast, we lack equivalent definitions, we cannot derive a well-defined set of computable functions, and we have no equivalent to the concept of computational universality. As a consequence, analog computers have to be built to particular task-oriented specifications, which tends to make them less versatile, and thus less useful and marketable in practice.
While digital algorithms are precisely circumscribed, we can interpret pretty much any physical process as an analog form of computation (see our example of the waterfall in the main text). Is it possible to say, then, that any physical system is carrying out some kind of computation? That computation, defined in its broadest possible sense, is nothing but physical change?
There is a wide range of opinions on this topic. We don’t quite see the point of renaming “physical dynamics” as “computation.” In fact, this conceptual conjurer’s trick seems unnecessarily confusing. Let us remember our original definition of computation as symbol manipulation. If any physical process computes just by changing, does that mean that any physical system performs symbol manipulation? How does this make sense? It is not immediately obvious, that’s for sure. But we’re not quite ready to address this problem yet. To find proper answers, we first have to deal with the relationship between computations and mechanisms, as well as the semantic aspects of symbol manipulation.
Mechanism
Often, people conflate the terms “algorithmic” and “mechanistic,” but the implied equivalence is problematic. In addition, we defined a “machine” to be some physical device whose functionality could be captured precisely on a Turing machine. Was that a reasonable move? We’re not yet sure. On top of all this, “machine” and “mechanism” are often used interchangeably. Is this okay? Or should we highlight the differences between these concepts? Are there any? And in what way do they matter?
Let us start with “mechanism.” Philosophers Bill Bechtel and Adele Abrahamsen have a definition that we like: they characterise a mechanism as a dynamic physical structure, which performs some function in virtue of its component parts, component operations, and how these are organised overall. The function of your bike, for example, is to transport you (to work, let’s say) at greater speed and across greater distances than are easily achievable on foot. The cycle, as a mechanism, manages this in virtue of its pedals, gears, wheels, steering bar, etc., and the organised interactions of these parts. Its orchestrated functioning manifests in patterns of change of its parts (the coordinated movement of pedals, wheels, …) which, in turn, are responsible for some phenomena or outcomes (i.e., the fact that you got to work on time).
A note of caution is necessary at this point: apart from being entities (or rather: processes) that exist in the physical world, we can also think of mechanisms as a type of explanation. We want to know how you got to work on time, and the bike explains this fact in a mechanistic manner. More specifically, mechanisms are a kind of local causal explanation, with interactions between parts stated in terms of cause and effect. We’ll have much more to say about the explanatory aspect of mechanisms in the main text. But here and now, let’s stick with mechanisms as dynamic physical structures.
Also: we should mention that mechanisms, as defined here, are not restricted to what we colloquially understand as mechanical devices (e.g., clockworks, made of interlocking metal gears). This definition is also broader than the conventional one in biology, where “mechanism” usually means a molecular mechanism. Philosopher Dan Nicholson has done a great job disentangling these different meanings of the term. Our take on the concept allows mechanisms to span many different levels of organisation, and includes pretty much any kind of material realisation — hydraulic, electric, electronic, even mechanisms made from immaterial interactions of electromagnetic or other force fields, or even people (as in “mechanisms of social change”). What is important here is that a mechanism is a physically actualised process that gets us from some initial configuration of its components to a particular outcome through causal interactions between the parts — whatever they are, really.
There are many evident parallels between mechanisms thus defined and algorithms. Both must have well-specified initial and end states, and both work according to a predefined set of fixed rules in an automated and reproducible manner. In addition, both mechanisms and algorithms serve specific purposes: they achieve some particular function — and reliably so.
But, as we have seen above, algorithms are bound by the rules of logic and exist exclusively in the symbolic realm, while mechanisms (interpreted as dynamic structures) are bound by what is physically possible because they exist in the physical realm. These two domains are not equivalent, and we spend much of the main text explaining why the difference really matters. Another important distinction to make is the following: algorithms are always discrete processes (implemented by the stepwise application of discrete operations), while mechanisms need not be. Thus, to summarise: mechanisms are not the same as algorithms. So do not confuse the two.
Despite all these differences, it is convenient to identify the notion of computation (broadly defined) with the mechanistic manipulation of symbolic representations. This includes both digital (Turing) and analog computing. But what does this restriction to mechanistic manipulations do?
It addresses the fact that physical mechanisms ought to go together with a logical explanation of a phenomenon. This immediately implies that the function of a mechanism can be captured or emulated accurately by an algorithm. It is true that a specific function could be achieved by processes that work through an illogical sequence of operations, like some chaotic or random processes. But this seems impractical as well as unreliable, since we would have no explanation why such a mechanism works the way it does in any particular situation, and it would not generalise or be reproducible in any straightforward way. In sum, although physically possible, we would probably not call such a process a mechanism.
To summarise: we don’t seem to be losing much by restricting our notion of mechanism to those physical processes that can be simulated by an algorithm. But this is not to say that non-computable physical processes don’t exist, just that we choose not to classify them as mechanisms. What we gain by this conceptual manoeuvre is a well-defined and easily identifiable class of mechanistic processes, which we will put to good use in later parts of the book.
In addition, it lets us equate machines with dynamical physical structures that are implemented by members of this well-defined set. Machine view, mechanistic view: the equivalence really does seem justified for our purposes. And if you think our definitions are too narrow, rest assured: their limitations are exactly what will prove useful further along our line of argumentation.
Often, people conflate the terms “algorithmic” and “mechanistic,” but the implied equivalence is problematic. In addition, we defined a “machine” to be some physical device whose functionality could be captured precisely on a Turing machine. Was that a reasonable move? We’re not yet sure. On top of all this, “machine” and “mechanism” are often used interchangeably. Is this okay? Or would we highlight the difference between these concepts? Are there any? And in what way does it matter?
Let us start with “mechanism.” Philosophers Bill Bechtel and Adele Abrahamsen have a definition that we like: they characterise a mechanism as a dynamic physical structure, which performs some function in virtue of its component parts, component operations, and how these are organised overall. The function of your bike, for example, is to transport you (to work, let’s say) at larger speed and across further distances than easily achievable by foot. The cycle, as a mechanism, manages this in virtue of its pedals, gears, wheels, steering bar, etc., and the organised interactions of these parts. Its orchestrated functioning manifests in patterns of change of its parts (the coordinated movement of pedals, wheels, …) which, in turn, are responsible for some phenomena or outcomes (i.e., the fact that you got to work on time).
A note of caution is necessary at this point: apart from being entities (or rather: processes) that exist in the physical world, we can also think of mechanisms as a type of explanation. We want to know how you got to work on time, and the bike explains this fact in a mechanistic manner. More specifically, mechanisms are a kind of local causal explanation, with interactions between parts stated in terms of cause and effect. We’ll have much more to say about the explanatory aspect of mechanisms in the main text. But here and now, let’s stick with mechanisms as dynamic physical structures.
Also: we should mention that mechanisms, as defined here, are not restricted to what we colloquially understand as mechanical devices (e.g., clockworks, made of interlocking metal gears). And it is broader than the conventional definition of a mechanism in biology, where it usually means a molecular mechanism. Philosopher Dan Nicholson has done a great job disentangling these different meanings of the term. Our take on the concept allows mechanisms to span many different levels of organisation, and includes pretty much any kind of material realisation — hydraulic, electric, electronic, even mechanisms made from immaterial interactions of electromagnetic or other force fields, or even people (as in “mechanisms of social change”). What is important here is that a mechanism is a physically actualised process that gets us from some initial configuration of its components to a particular outcome through causal interactions between the parts — whatever they are, really.
There are many evident parallels between mechanisms thus defined and algorithms. Both must have well-specified initial and end states, and both work according to a predefined set of fixed rules in an automated and reproducible manner. In addition, both mechanisms and algorithms serve specific purposes: they achieve some particular function — and reliably so.
But, as we have seen above, algorithms are bound by the rules of logic and exist exclusively in the symbolic realm, while mechanisms (interpreted as dynamic structures) are bound by what is physically possible because they exist in the physical realm. These two domains are not equivalent, and we spend much of the main text explaining why the difference really matters. Another important distinction to make is the following: algorithms are always discrete processes (implemented by the stepwise application of discrete operations), while mechanisms need not be. Thus, to summarise: mechanisms are not the same as algorithms. So do not confuse the two.
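To make the parallel tangible, here is a minimal sketch (our own illustration, not drawn from the main text) of Euclid’s algorithm for the greatest common divisor. It displays all three shared features: a well-specified initial state, a fixed rule applied in discrete steps, and a reliably reached end state.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, fixed set of rules applied stepwise."""
    # Initial state: a pair of non-negative integers (a, b).
    while b != 0:
        # One discrete operation per step: replace (a, b) with (b, a mod b).
        a, b = b, a % b
    # End state: b == 0; a now holds the greatest common divisor.
    return a

print(gcd(48, 18))  # 6
```

Note that nothing here depends on how the steps are physically realised: the same sequence of discrete operations could be carried out with pencil and paper, which is precisely what separates the algorithm from any particular mechanism implementing it.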
Despite all these differences, it is convenient to identify the notion of computation (broadly defined) with the mechanistic manipulation of symbolic representations. This includes both digital (Turing) and analog computing. But what does this restriction to mechanistic manipulations do?
This restriction addresses the fact that a physical mechanism ought to go together with a logical explanation of a phenomenon. This immediately implies that the function of a mechanism can be captured or emulated accurately by an algorithm. It is true that a specific function could be achieved by processes that work through an illogical sequence of operations, like some chaotic or random processes. But this seems impractical as well as unreliable, since we would have no explanation of why such a mechanism works the way it does in any particular situation, and it would not generalise or be reproducible in any straightforward way. In sum, although physically possible, we would probably not call such a process a mechanism.
To summarise: we don’t seem to be losing much by restricting our notion of mechanism to those physical processes that can be simulated by an algorithm. But this is not to say that non-computable physical processes don’t exist, just that we choose not to classify them as mechanisms. What we gain by this conceptual manoeuvre is a well-defined and easily identifiable class of mechanistic processes, which we will put to good use in later parts of the book.
In addition, it lets us equate machines with dynamic physical structures that are implemented by members of this well-defined set. Machine view, mechanistic view: the equivalence really does seem justified for our purposes. And if you think our definitions are too narrow, rest assured: their limitations are exactly what will prove useful further along our line of argumentation.
Semantics
So far, we have limited our discussion to symbolic processes (algorithms) and their actualisation in the form of particular symbol vehicles (the physical components of mechanisms, and their interactions). But we have not yet addressed the relation between signifier and signified: what is computation about?
In the human context, we usually compute in order to solve certain kinds of real-world problems. Computation and problem-solving are flipsides of the same coin, as we have already discussed in chapters 5 and 7. The problems we want to solve could be mathematical, they could be about optimising our schedule, or resource allocation, or we could be engaged in logically solving puzzles, like that famous fictional human computer, Sherlock Holmes. The main point is: human computation is always about some problem we have picked out of our particular large-world context. Computation only begins when we already have a well-defined problem to be solved in a formalised manner.
But then, what happens when we take computation out of our human context? What do we mean when we say a physical process computes, without any human being around to observe the process or its outcome? Does a tree falling in the woods still make a sound when nobody is around? Actually, yes. It does. But does a physical system — say, the waterfall from chapter 7 — still compute if nobody is around? Nope. It does not. At least we don’t think so.
A computation that is not used for any purpose by some intelligent being has no meaning at all. By definition, the syntactic process of sign manipulation is devoid of any semantic meaning. An algorithm is just a rote sequence of operations according to pregiven rules. The meaning only comes in through the semantic content, the real-world goal which we are trying to attain by computing.
This insight has all kinds of profound ramifications, which we discuss throughout the book: algorithms cannot think or understand in the same sense that intelligent beings do, machines cannot have a point of view or want anything, mechanisms cannot be true agents, and so on and so forth. But the point we want to focus on right now is another one: if the meaning of a computation is only present in a human context, what does it mean when we say “a physical process performs a computation?”
Maybe we’re just mistaken: we confuse the fact that we can simulate any mechanism to an arbitrary degree of precision (as we’ve shown above and in chapter 7) with the idea that the physical process itself computes. The two are clearly distinct, and the difference really matters. When we perform our simulation, we do so with a problem in mind that we want to solve. Perhaps we want to use the physical process we simulate for our own (benign or nefarious) ends? Or we want to understand how it works. Take the weather, for instance. Some people think we should be able to manipulate it to our own benefit. Eternal sunshine, light breeze, and moderate temperatures for everyone! Or something like that. We worry that this idea has not really been thought through. But even we would like to have more reliable weather forecasts. In fact, we demand them! We don’t like it either if we get rained on, snowed in, or scorched by relentless heat on our various outdoor adventures.
The thing is: when we simulate a physical process like the weather, it is not the physical process doing the computation. The clouds scudding across the sky are not performing any kind of calculation. They’re just being clouds. It is us who impose the computation on the process. We simulate the movement of clouds for our weather forecast. There is a special word for this: it’s called imputation. We impute the calculation to the process. The weather does not care or know about meteorology. The computation arises out of our purpose-driven interaction with the physical world, our effort to understand what is going on. It does not reside “out there” in the world — whatever that may mean.
At the risk of flogging a dead horse, we’ll repeat this central point one last time: the fact that we can accurately simulate a process does not mean the process is itself computing in any meaningful sense. The weather does not perform any calculations. We do, when we simulate it. This point seems blindingly obvious to us, but a surprising number of people seem to have a surprisingly hard time grokking it. Simulation and reality, map and territory. It is really not that hard.
But then, you say, we can not only simulate a waterfall, but we can also use it to predict a game of chess. Although more cumbersome than simply using an electronic computer, we can coax various physical configurations into doing computational work for us. This is different from simulation. The symbolic process of computation is related to its semantic content by arbitrary conventions. There are many (perhaps infinitely many) possible relations. This way, even simple physical systems can achieve computational universality (in Turing’s sense), if we only manipulate them appropriately.
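The point about arbitrary conventions can be made concrete with a toy sketch (entirely our own; the levers and dial are hypothetical). The very same physical process, read under two different encoding conventions, “implements” two different logical functions. The computation lies in the convention we impose, not in the physics.

```python
# A toy "physical process": a dial that ends up high whenever at least
# one input lever is raised. (A hypothetical setup, purely illustrative.)
def physical_process(lever_a: bool, lever_b: bool) -> float:
    return 0.9 if (lever_a or lever_b) else 0.1  # final dial reading

# Convention 1: dial readings above 0.5 stand for the symbol "1".
def encode_high_as_one(reading: float) -> int:
    return 1 if reading > 0.5 else 0

# Convention 2: the inverse labelling. Same physics, relabelled.
def encode_high_as_zero(reading: float) -> int:
    return 0 if reading > 0.5 else 1

inputs = [(False, False), (False, True), (True, False), (True, True)]
# Under convention 1, the process "implements" logical OR ...
print([encode_high_as_one(physical_process(a, b)) for a, b in inputs])   # [0, 1, 1, 1]
# ... while under convention 2, the same process "implements" NOR.
print([encode_high_as_zero(physical_process(a, b)) for a, b in inputs])  # [1, 0, 0, 0]
```

Nothing about the dial changed between the two readings; only our labelling convention did. Which logical function the process “performs” is settled by us, not by the physics.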
Considering that we can use so many physical processes for computation, does that not mean they are computational in some sense that is independent of us? If mechanisms can implement algorithms, aren’t they always computing in some sense, even if there is no human being around to observe them?
Yet again, we must ask: what would be the point of thinking about physics in terms of computation if we are not moulding the physical dynamics towards our own purposes? What is the point of thinking about physics in general in terms of computation? We can think about this in two ways.
On the one hand, we may want to claim that all physically possible processes are simulable by us limited human beings. We explain in the main text why there is no reason to assume this claim is true (and good reasons to believe it isn’t).
On the other hand, we may want to make an even stronger claim: that computation is fundamental — that the universe literally is some kind of computer. Doesn’t Wheeler have a point when he says that below all matter and energy lies the simple binary distinction between existence and non-existence? Don’t Deutsch and Lloyd tell us that quantum physics and quantum computation are equivalent? Couldn’t it be true that the universe really is made up of information at its most fundamental level?
It should be fairly obvious that both of the above claims are driven by a desire to believe that we can completely understand, predict, and control our world. Computers, after all, are programmable machines and if reality is (like) a computer then we’ll one day be the masters of that universal machine. We argue in chapter 8 that this is a modern kind of salvation narrative that is becoming increasingly popular these days. But it is not a rational or coherent vision of the world.
The central problem is and remains that meaning and representation — the semantic aspects of computation — are intimately tied to some kind of limited intelligence performing the calculation. They cannot be fundamental in this sense, since they require the presence of intelligent agents. They are not a property of the territory, but are part of our map. It makes sense to see the computational approach as a good conceptual tool to probe the large world we live in. It is suited for simulating reality and for engineering physical processes to do our bidding. However, it is not useful to think of computation (or information) as fundamental ingredients of the universe. The world is not made of information!
In fact, neither is it made of matter and energy. Those concepts, too, are conceptual tools that serve us to make sense of our large world. They all represent different kinds of maps, and very useful ones at that! By understanding the relations between them, we can learn about the territory. But if we mistake them for aspects of the territory, if we reify them, we get stuck in our own map.
For this reason, the (pan)computationalist worldview is both confused and confusing. It is the ultimate expression of small-world thinking. The culmination of human hubris. It is but a cage, and a terribly small one at that. We must break this rusty cage and run! And our first step to freedom must be to stop reifying computation. Let’s ditch the computer metaphor. Let’s rage against the machines!
Computation is not physics, stupid. Just abstract signs. Not the territory. Just a map. And a powerful one at that. If we’d only learn to use it properly …
The authors acknowledge funding from the John Templeton Foundation (Project ID: 62581), and would like to thank the co-leader of the project, Prof. Tarja Knuuttila, and the Department of Philosophy at the University of Vienna for hosting the project of which this book is a central part.
Disclaimer: everything we write and present here is our own responsibility. All mistakes are ours, and not the funders’ or our hosts’ and collaborators’.