© Ronald de Sousa
Philosophy, University of Toronto
Toronto, M5S 1A1
If AL is to biology what AI is to psychology, we should expect to find an interesting array of analogies and differences. One analogy has often been noted: like AI, AL's ambitions might be either strong or weak (Sober). The strong aspiration is actually to create life; the weak, merely to study life with models and simulations. The word `artificial' fits equally well either way: artificial flowers are not flowers, but artificial light is real light.
The usual objection to strong AI is one that has also been adduced against strong AL: it is that simulation is the most one can hope for unless one is able to produce the very matter of life. Rainstorms simulated in a computer have neither real wind nor real rain. How then could computers promise more than mere simulations of intelligence or life?
The responses available to AI and to AL are not the same. One disanalogy favours AL; the other favours AI.
On the one hand, AL has no analogue to the problems of intentionality and consciousness. Many people are sceptical of the validity of the Turing test for intelligence in a machine: it seems at least logically possible that a machine might behave in such a way as to convince observers that it was really thinking, when in fact its responses were either ``canned'', or just the outcome of purely syntactic processes lacking intrinsic intentionality (Searle 1980). Either way, the machine would then have neither consciousness nor intentionality, and so would not embody any real intelligence or mentality. But this problem does not seem seriously to threaten AL. If we could agree on an objective test for life, it is hard to imagine someone seriously complaining that something might pass it, and yet lack essential `vitality'. One might, of course, be fooled for a time by a machine that appeared to be alive, but turned out not to be. But this merely means that an appropriate test must require a sufficiently long period of observation. It is not, as it has seemed to some in the case of AI, that something could behave as if it were alive for an indefinite period of time, and yet not be alive.
On the other hand, there is one possible way out of the problem for AI that is not obviously available for AL. It consists in claiming that the matter of thought is information. If that is so, then information, rightly manipulated by an artificial system, might qualify as genuinely intelligent. Such comfort, however, is not available to AL. It is not equally plausible to claim that the stuff of life is information. So strong AL, on the basis of this consideration, seems less likely to be achieved than strong AI.
The aim of this paper is to articulate, and to draw out some implications of, the intuition that information cannot count as the matter of life. The reason for this, I shall argue, is that whatever is alive must be an individual.
Temporality and Externalism
After a few hundred years of thinking of time as just a slightly peculiar quasi-spatial dimension, biology is the one science that has taught us to take time seriously. This seems pressed on us by the irreversibility of evolution. And evolution is often cited as an essential feature of living things (Mayr, cited by Taylor). So it wouldn't be surprising to find the dimension of time turning out to be crucial in discussions of AL.
Yet some have claimed that the core meaning of life is not reproduction or evolution, but metabolism, and that the judgment of whether something meets the criteria for being alive can therefore be made independently of any consideration of an extended temporal dimension. Metabolism also occurs in time, of course; but its time scale is, as it were, merely local. It does not need to be thought of as part of an all-inclusive temporal dimension. Outside the context of an ecology where there is interaction and evolution, the metabolism of one organism need bear no relation to that of any other. Metabolic time relates to the inclusive temporality of evolution as a Leibnizian monad relates to Newtonian absolute space. A monad contains in itself everything that is relevant to its identity. An item in real evolutionary time, by contrast, may require for its very identification some reference to its history in absolute time.
These ideas suggest another possible parallel with cognitive science: is there, in AL, a problem about `externalism'?
In cognitive science, Externalism is the view first expressed in Putnam's phrase, that ``meanings ain't in the head'' (Putnam; Burge). We fail to have full access, let alone privileged access, to the content of our own mental states, because the identity of these contents depends in part on their reference, and that in turn is fixed by a linguistic community, not by anything locatable in myself alone. Meaning and content depend on a whole system that includes the individual only as a functioning part. The literature on this subject abounds with ingenious cases, but one could make a more limited point by reference to an obvious fact of common sense: `Remembering that p' is a mental state, but it is not one that I can be sure of being in, since I am only really remembering that p if I experienced p (or at least if p is true). I have no privileged access to that fact, since p lies outside of my present consciousness. Systems theorists, from Gregory Bateson on, have often told us that we must not take the skin too seriously. The processes that we are interested in -- action, thought, behaviour -- are essentially interactive ones that depend on more than what happens inside my own skin. The analogue in the case of AL might be the claim that whether a particular individual is or is not alive depends on more than can be discovered by inspection of that individual in isolation. (1)
In order to set up this argument, however, it seems we need some preliminary notion of an individual's boundaries -- if only to dissolve them. In the case of mental externalism, do we not need to start with some notion of the inner / outer of consciousness, or first-person awareness? For the action, thought, or behaviour that interests us is first identified from the point of view of the organism itself as we understand it preanalytically. And that generally means that it is identified first in a quasi-Cartesian way, as just what I AM, THIS THINKING THING. Otherwise, there seems to be nothing to define the unity that we are talking about -- whether to assert or deny that it is contained within the skin. If there is an argument about the relative importance of what occurs on either side of some membrane, one needs at least a preliminary answer to the question: what is the individual thing or process in relation to which the question is being asked? What, then, is the relevant sense of individuality?
Part of the answer lies in teleology. A biological individual is something that in some sense actively maintains itself as a unity. Hence there must be some mechanism for detecting what is and what is not part of the individual in question. The immune system performs this function in animals, not always perfectly. Sometimes it makes ``mistakes''; but the mistakes obviously don't consist in any violations of the laws of physics or chemistry -- or even of biology, if there are any laws in biology (Matthen and Levy). They can only be identified as mistakes on the basis of some prior notion of the unity of the particular.
There are borderline cases. On the theory made famous by Lynn Margulis, ``RNA is the most ancient and the most incurable of our parasitic diseases'' (Dyson, p. 15). Eukaryotic cells are constituted of organisms that came together at some early stage in evolution, but which originally belonged to entirely different forms of life. Now at the beginning of the story told by Margulis, supposing it is true, there would be cells invaded by parasites: two organisms in conflict. At the end of the story, there is one single cell -- actually now perhaps a part of yet another individual. But what happened in between? How is each intermediate stage to be described in terms of the individuality of the components? It seems that the answer cannot, at every stage, be given on the basis of strictly synchronic criteria. The first time we label something `one of the first eukaryotic cells', we must do so in retrospect: it is only because of the subsequent history of cells that what were once host and parasite now constitute a single organism.
This is precisely parallel to temporal externalism in cognitive science. On some views, ``Swampman'', an organism corresponding molecule for molecule to a thinking living being, but arising out of some random process by some fantastic chance, would not have real thoughts, really say things, etc., even though it seemed to behave exactly as if it did (Davidson, Millikan). Soon after, however, having become causally connected in the right way with the objects of its environment, its states can be recognized -- possibly even retrospectively -- as genuinely cognitive. Similarly, I have suggested, some organisms may count as single individuals only in retrospect. Just as the status of a mental content cannot be determined in the absence of knowledge about its history, linguistic context, and causal antecedents, so it may be that the unity of a living thing cannot be fully understood independently of the temporal dimension.
One class of devices for which strong claims have been made consists of those that instantiate a kind of metabolism or autopoiesis. Autopoiesis has been claimed to be both necessary and sufficient for life; in addition, autopoietic systems can be specified without any reference to the temporal dimension (cf. Fleischaker 1988, 1990; Varela, Maturana, & Uribe 1974). In the remainder of this paper, I wish to examine these claims.
Living cells constitute the paradigm case of natural autopoietic systems (Varela et al. 1974, 188). In the domain of Artificial Life, autopoietic systems can be readily realized in Cellular Automata. A Cellular Automaton is essentially a collection of cells in a virtual space (typically represented on a computer screen), each of which can be in any of a finite number of states. At every cycle of a universally ticking clock, the state of each cell is revised according to rules that refer exclusively to its own state and the state of its neighbours. As in the case of Turing machines, this very simple characterization turns out to be compatible with a kind of device able to simulate indefinitely complex behaviour; in fact, CAs have been shown to be equivalent to Turing machines (Langton 1986). Conway's ``Game of Life'' is the most popularly known of these devices (Gardner 1970). Its most intriguing feature is its capacity to produce, on the basis of rules that govern only local interactions, patterns that have the look of having been globally designed. These patterns sometimes have a stability and an integrity that prompt us to give them names -- `glidergun', `twizzler', `robotman' (Rucker) -- and tempt us to treat them as if they were individuals with their own principles of unity and integrity. An autopoietic system is one such type of pattern, meeting the following especially stringent criteria of autonomy:
SELF-BOUNDED -- THE ENTITY has an interior and a boundary constituted by discrete components,
SELF-GENERATING -- ALL COMPONENTS, both interior and boundary, are produced by component transformations, and
SELF-PERPETUATING -- ALL COMPONENT TRANSFORMATIONS are determined by relationships among component properties. (Fleischaker 1990, 130).
the specific molecules that are incorporated in the system determine the system organization which generates pathways whose operation produces molecular structures, structures that embody the physical system (the metabolic `self') and determine the system organization which generates pathways, et cetera ad infinitum. (Fleischaker 1990, 129-130)

Such systems are claimed to have features not only necessary but sufficient for life (Varela & Maturana, 189). Yet they can be considered in abstraction from their origins, from replication or reproduction, or from evolution:
To say that autopoiesis is an operational definition of the living means it is a judgment rendered now at this very moment. Thus there is no role for implied future states (e.g. capacity `to grow', capability `to reproduce') in the autopoietic definition ... there is no empirical basis for including future states: metabolising cells, not at the moment expanding or multiplying, are living systems; sterile hybrids and grandmothers, at the moment incapable of reproducing, are nonetheless living! (Fleischaker 1990, 131)

Whether the basic fact of life is metabolism or reproduction may be thought to be merely a definitional matter (Dyson). On it depend such questions as whether viruses are alive or not; but on THAT question nothing at all hangs, I suppose -- unless one is some sort of maniacal defender of the rights of all living things, and is wondering whether that has to include a benevolent attitude towards viruses.
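The cellular-automaton substrate in which such autopoietic patterns are realized can be made concrete with a minimal sketch of Conway's Life in Python (my own illustration, not code from any of the works cited; the set-based representation and the glider coordinates are choices of convenience). Running a glider for four generations exhibits the kind of coherent, mobile pattern that tempts us to speak of individuals:

```python
from collections import Counter

def life_step(live):
    """One synchronous update of Conway's Life on an unbounded grid.
    `live` is the set of (x, y) coordinates of the living cells."""
    # Count the live neighbours of every cell adjacent to a living one.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly three neighbours; survival on two or three.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: a five-cell pattern that reappears, displaced by (1, 1),
# every four generations -- purely local rules yielding a coherent whole.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Note that nothing in the update rule mentions the glider: its apparent unity and integrity are entirely in the eye of the observer.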
Nevertheless, I believe the opposing intuition is worth articulating:
Fleischaker and Varela & Maturana are categorical, as we saw, about the primacy of the criterion of autopoiesis over that of reproduction and evolution. This, as I argued at the outset, amounts to declaring the irrelevance of time as a comprehensive dimension. About the irrelevance of materiality, our authors are more ambivalent:
Functionalism in AL
I propose to approach this question somewhat indirectly, by first returning to a question suggested by the parallel between AL and AI. To what extent might we be functionalists in AL? In the second Langton volume, Elliott Sober has suggested that this question is, for AI, an empirical one:
The parallel can be pushed. Just as Platonism, in general, is the view that the locus of reality is outside the physical altogether, so functionalism in its pure form insists that mental properties just are functional ones. Like Platonism about physical properties, pure functionalism is implausible. Hence the attraction of the Lewis form of functionalism, which is a kind of identity theory with ``something, whatever it may turn out to be'' -- that is, a material thing or state -- which fulfills certain functional conditions (Lewis 1972). This type of functionalism is the only one that preserves the material particularity of the thing to which we are attributing a mind.
How physical are autopoietic systems?
Autopoietic systems live on computer screens. Or do they? Perhaps they live in computer memories, or in the whole circuitry that controls what happens on the screen. (What if the monitor is turned off?) If living things are individuals, then we should be able to answer the question: What is the principle of individuation and reidentification for autopoietic systems?
I said earlier that the notion of individual includes three notions: unity, particularity, and uniqueness. Autopoietic systems are especially successful at achieving unity: there is a clear answer to the question of what is and what is not part of the system, and its unity is guaranteed by the cycle of production and control that defines it. But particularity and uniqueness are more problematic. In the case of most of the actual (physical) living systems with which we are familiar, uniqueness and particularity are dissociable only in principle: Leibniz's conviction that there could never be two identical leaves may have been metaphysically intemperate, but it was biologically plausible. Even clones are only genetically identical, and can differ as individuals. In any case, they will differ in their matter. But the case is not so clear for autopoietic cellular automata.
For consider the model presented in Varela & Maturana. Suppose we run it several times, starting from the same configuration. The program, being deterministic, will develop in the same way every time. Are we then dealing with one or with several autopoietic individuals? It is tempting to borrow some terminology from linguistics and say: we are dealing with different tokens of the same type. But there seem to be important differences between the relation of type to token and the relation of individual to species or kind. One is that the relation of token to type seems to depend on an interpreter: someone is needed to decide that something is a token of a given type, according to more or less objective criteria. The other is that tokens are not causally and materially related to types as individuals are causally and materially related to species. (2)
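The determinism at issue can be shown with a toy update rule (a stand-in of my own devising, not the Varela, Maturana & Uribe model itself): two runs of the same deterministic program from the same initial configuration agree state for state, which is exactly what makes it tempting to say we have two tokens of one type rather than two individuals:

```python
import random

def step(cells):
    # A toy deterministic local rule: each cell becomes the XOR of its
    # two neighbours (with wrap-around), in the spirit of a 1-D CA.
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def run(seed, steps=10, width=16):
    rng = random.Random(seed)  # same seed => same initial configuration
    cells = [rng.randint(0, 1) for _ in range(width)]
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

# Two "runs of the same program" are state-for-state indistinguishable:
print(run(42) == run(42))  # True
```

Nothing inspectable in either run distinguishes it from the other; whatever individuates them must lie outside the states themselves.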
Individuals and Species
Although individuals are unique, they are also typically members of species or kinds. Species formed the most obvious model for the notion of natural kinds, but in a post-Darwinian perspective species can no longer be thought of as themselves real natural kinds: they are, instead, extended individuals of which parts are related in lineages (de Sousa 1984). So, curiously enough, there are no natural kinds in biology. (Whether there are real natural kinds among the objects studied by physics and chemistry is controversial (de Sousa 1989).)
On this view, it is easy to understand the two disanalogies just mentioned between the type/token relation and the individual/species relation.
First, if membership in a species is at least in part a matter of issuing in the right way from the right progenitors, then the way that we decide that something is a member of a species is not observer-relative. Whether the individual in question does or doesn't have these progenitors is not a matter of interpretation: either it does or it doesn't belong to the right lineage.
Second, autopoietic systems implemented in the form of Cellular Automata present no obvious analogy with the causal-temporal relations that define membership in a species. But perhaps it is mere prejudice to demand that a living thing belong to a species. Artificial objects are artifacts: should they not have the same conditions of individuation as other artifacts? This may seem more straightforward (as well as somewhat arbitrary). Artifacts are generally manufactured to serve a definite purpose. If so, then functionalism seems `natural' in the case of artifacts: we might say that any two objects belong to the same natural kind if they are functionally identical, that is, if they run the same program.
But do they have to run it in the same way? Again, the example of natural living things provides little guidance: we certainly wouldn't require (unless we were Derek Parfit) that a person or an animal no longer count as the same if it starts ``running its program'' in a novel way -- behaving in unexpected ways, or even using different methods in different circumstances to attain the same general ends. The analogue of this in the case of a program running an autopoietic system would seem to be: if a given program runs twice, but forms distinct patterns because it has been fed different initializing conditions, is it still the same program?
The answer is obviously Yes, but maybe that was the wrong question, based on the wrong analogy. The same program is something like the analogue of the same genome: but we are not tempted to say that two clones are the same thing just because they share a genome. Once again, like Sober in the passage quoted above, we have allowed ourselves to be tipped into a category mistake: if we are to take seriously the idea that we are dealing with individuals, we need to identify the matter in which they have their being.
But what is that? If it is the computer on which someone is running the program, then it seems we might have several tokens of the same type even within a single material substrate, since the same program can run at different times on the same machine. Besides, there are several machines of exactly the same type and model that could be running the program. So matter, which since Aristotle we are used to thinking of as the ground of the individuality of different members of the same kind, actually fails to ground that distinction in this case. Or rather, it may require us to count as identical what common sense would describe as different instances: what happens on different occasions when I run the same program, whether or not the initial configuration is the same, on the same or different computers. If that is the way we should think of individuality, then it seems that what is to count as the matter is entirely dependent on what we agree to count as running the same program ``in the same way''. How, for example, are we to count individuals when a given program is being run on a computer attached to several monitors? And where is, or are, the individuals in question? (3) It is a flimsy sort of matter that has this much arbitrariness about its very identification.
We require, in a genuine individual, a kind of solidity that is grounded in matter. One mark of this solidity is captured by what we might call the Modal Test: a true individual must be one to which we can refer with rigid designators. We can ask questions about possibilities, that is to say, about what is possible for it; we can take seriously the hypothesis of its existence in other possible worlds. (4) How, then, would we apply rigid designators to autopoietic systems? It seems there is not even any intuition that we can have about how this autopoietic structure would have behaved if things had been different. For if this structure is characterized exclusively as a type, then `if things had been different' must mean either: if it had behaved differently (in which case it would not have been this one at all) or if anything else had been different, in which case it would be, by definition, just another avatar of the same individual. There is no individual there with the robustness required to support counterfactual intuitions.
Another sort of matter?
The argument I have been making concerning autopoietic devices does not apply indiscriminately to every putative artificial life form. The case is quite different, for example, with computer viruses. For since the latter, unlike the former, are capable of reproduction, we can think of their different avatars in different computers as genuine analogues of individuals that are part of a lineage. In this way, then, the issue of the importance of reproduction to life turns out to be more than merely a verbal matter: for it seems we can devise criteria of (re-)individuation to fit those putative AL forms that reproduce, but not those that do not.
Perhaps, however, I am merely being crude in assuming that the matter in which our autopoietic devices move-and-have-their-being is some sort of ordinary matter. Langton, in his discussion of ``Life on the edge of chaos'' (1992), raises an intriguingly different alternative. He speaks of his CA experiments as being concerned with ``programmable matter'' (a phrase he credits to Toffoli 1987):
Individuals and the biosphere
There is one final consideration in favour of the ineliminability of the temporal dimension. The oldest truism about life is that it is complex and organized. Its organization is an actual product of evolution. That this should be so is no logical necessity, to be sure: the story of special creation is biologically absurd, but it is not logically impossible. Nevertheless, theological hypotheses apart, it is difficult to imagine how organized complexity could ever have come about without evolution by natural selection. In turn, evolution by natural selection is inconceivable without the special relationship that exists between individuals and the groups -- species, population, ecological environment -- of which they are a part. The Leibnizian connection I adverted to earlier -- the fact that uniqueness and particularity, by and large, go hand in hand in practice -- is essential to the variation in natural possibilities on which the development of more elaborate forms necessarily rests. A designed artifact could, of course, bypass this process. But if it did not result in the existence of individuals, it seems that it could then go no further: it could not take its place in an ecological space in which the interplay of types and individuals gave rise to further variations, and to the further appearance of novel individuals and types.
Since this discussion concerns thought experiments, albeit computer-aided ones, I have had no recourse but to appeal to intuitions. Thought experiments, after all, can reveal nothing about the real world: they only tell us something about the state of our own concepts. This much, at most, is all that can be claimed for the considerations I have adduced to show that autopoietic systems should not be counted as alive because they are not true individuals: our concept of life, or at least mine, includes a distinction between the concrete individuals that exist uniquely in space-time, and the functions, capacities or properties that distinguish them from non-living things.
But how arbitrary, or how pointless, is such speculation? What are the guidelines for reasonable speculation about how we might define things? It's not the case that just anything goes; there are constraints, but it's not exactly clear where they come from. They aren't exactly empirical, since we are discussing possible worlds rather than actual ones. (On the other hand, empirical evidence constrains possibilities.) The philosophy of AL demands, perhaps, a sort of judicious myopia: if our sight in looking at RL (Real Life) is too keen, we will be incapable of the requisite degree of abstraction. But if, on the other hand, it is too blurry, we shall see nothing at all and be condemned to playing with only those toys that we can hold very close to our noses.
Bateson, G. (1972). Steps to an ecology of mind. New York: Ballantine.
Bedau, M. A., & Packard, N. H. (1992). Measurement of evolutionary activity, teleology, and life. In C. G. Langton, C. Taylor, J. D. Farmer, & S. Rasmussen (Eds.), Artificial Life II [Proceedings of the Workshop on Artificial Life held February 1990 in Santa Fe, NM] (pp. 431-61). Santa Fe Institute Studies in the Sciences of Complexity. Redwood City, CA: Addison-Wesley.
Davidson, D. (1987). Knowing one's own mind. Proceedings and Addresses of the American Philosophical Association, 60, 441-58.
de Sousa, R. (1984). The natural shiftiness of Natural Kinds. Canadian Journal of Philosophy, 14, 561-80.
de Sousa, R. (1989). Kinds of kinds: Individuality and biological species. International studies in the philosophy of science, 3(2), 119-35.
Dennett, D. C. (1978). Where am I? [Ch. 17]. In Brainstorms. Montgomery, VT: Bradford Books.
Dyson, F. (1985). Origins of Life [Tarner Lectures]. Cambridge: Cambridge University Press.
Fleischaker, G. R. (1988). Autopoiesis: The status of its system logic. BioSystems, 22, 37-9.
Fleischaker, G. R. (1990). Origins of life: An operational definition. Origins of life and evolution of the biosphere, 20, 127-37.
Gardner, M. (1970). The fantastic combinations of John Conway's new solitaire game 'Life'. Scientific American, 223(4), 120-23.
Hume, D. (1978 [1888]). A treatise of human nature (2nd ed.; L. A. Selby-Bigge, Ed.; revised, with notes, by P. H. Nidditch). Oxford: Oxford University Press, Clarendon.
Langton, C. G. (1986). Studying artificial life with cellular automata. Physica D, 10(1-2), 120-49.
Langton, C. G. (1992). Life on the edge of chaos. In C. G. Langton, C. Taylor, J. D. Farmer, & S. Rasmussen (Eds.), Artificial Life II. Santa Fe Institute Studies in the Sciences of Complexity. Redwood City, CA: Addison-Wesley.
Lewis, D. (1972). Psychophysical and theoretical identifications. Australasian Journal of Philosophy, 50, 249-58. Reprinted in N. Block (Ed.), Readings in Philosophy of Psychology (1980).
Lewis, D. (1986). On the plurality of worlds. Oxford: Blackwell.
Margulis, L. (1981). Symbiosis in Cell Evolution: Life and its environment on the early earth. San Francisco: W.H. Freeman.
Matthen, M., & Levy, E. (1984, July). Teleology, error, and the human immune system. Journal of Philosophy, 81(7), 351-72.
Mayr, E. (1982). The growth of biological thought: Diversity, evolution, and inheritance. Cambridge, MA: Harvard University Press, Belknap.
Millikan, R. (1984). Language, thought, and other biological categories. Cambridge, MA: MIT Press <A Bradford Book>.
Putnam, H. (1975). The meaning of 'meaning'. In Mind, language and reality [Philosophical Papers, Vol. 2] (chapter 12). Cambridge: Cambridge University Press.
Putnam, H. (1978). Meaning and the moral sciences. International Library of Philosophy and Scientific Method. Boston: Routledge and Kegan Paul.
Rucker, R. (1989). CA Lab: Rudy Rucker's Cellular Automata Laboratory (with an essay by John Walker; software included). Sausalito, CA: Autodesk, Inc.
Searle, J. R. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3, 417-57.
Sober, E. (1992). Learning from functionalism -- prospects for strong artificial life. In C. G. Langton, C. Taylor, J. D. Farmer, & S. Rasmussen (Eds.), Artificial Life II. Santa Fe Institute Studies in the Sciences of Complexity. Redwood City, CA: Addison-Wesley.
Strawson, P. F. (1963). Individuals: An essay in descriptive metaphysics. Garden City, New York: Doubleday, Anchor.
Taylor, C. E. (1991). "Fleshing out" Artificial Life II. In C. G. Langton, C. Taylor, J. D. Farmer, & S. Rasmussen (Eds.), Artificial Life II [Proceedings of the Workshop on Artificial Life held February 1990 in Santa Fe, NM]. Santa Fe Institute Studies in the Sciences of Complexity. Santa Fe: Addison-Wesley.
Toffoli, T., & Margolus, N. (1987). Cellular automata machines. Cambridge, MA: MIT Press.
Varela, F. J., Maturana, H. R., & Uribe, R. (1974). Autopoiesis: The organization of living systems. BioSystems, 5, 187-96.
1. An extreme claim to this effect has been made by Bedau and Packard: ``No single molecule of gas has a macroscopic property like temperature; temperature is meaningful only for large populations of molecules. . . . From a global perspective, only the complex web of interacting organisms -- the entire biosphere -- remains `alive' in the long run. . . . We believe it is fruitful, theoretically and experimentally, to link the notions of an individual's life to the vitality of the global system in which the individual lives.'' (Bedau and Packard, p. 457)
2. This problem is anticipated in Hume's remark that sounds are not suited to be treated as individuals (Hume, Treatise, section on Personal Identity) and nicely expounded in Strawson's Individuals. Sounds are intriguing in this regard because they illustrate the fact that whether or not a category of thing can be a proper individual does NOT depend solely on whether it is a purely `abstract' concept.
3. Daniel Dennett has written a science-fiction version of this problem in the very funny `Where Am I?', in which the subject is imagined to have a body manipulated by a brain many miles away.
4. Whether these are interpreted realistically, as in the view of David Lewis, or merely as models, as in Kripke or van Fraassen.