UNIVERSITY OF TORONTO
DEPARTMENT OF ENGLISH

ENG 6900Y: Cybertext

Course Description

Overview

Cybertext studies modes of discourse in which computer technology co-acts with cyborgs, whether author, critical reader or analyst, or editor. The compound 'cyber-text' (Greek kybernan, 'to steer'; Latin texere, 'to weave') relates to cybernetics, the study of communication and control both in the human mind and nervous system and in mechanical-computational systems. Cybertext makers are cyborgs ('cybernetic organisms'), that is, authors and reader-interpreters who use cybernetic tools to compose and to understand. Self-sufficient artificial intelligences that make and control things independently of people are robots or cymacs. Over the past two decades, they too have begun to make cybertexts. This course will introduce you to the theory, applications, and fiction of and by cybertext-making cyborgs and cymacs.

          If cyborgs are extended humans, their cybertexts are also, in McLuhan's phrase, "extensions of man." Theoreticians of cyborg literature include McLuhan (1964), Weizenbaum (1976), Turkle (1984), Haraway (1990), Bolter (1991), Postman (1992), Poster (1995), Aarseth (1999), and Hayles (1999). These critics often focus on questions of identity and morality. Fiction about cyborgs tends to be cyberpunk and postmodernist: Dick's Do Androids Dream of Electric Sheep? (1968), Gibson's Neuromancer (1984), and Stephenson's Snow Crash (1992) are good examples. Fiction by cyborgs -- man-machine partners -- begins with conversationalists like Eliza and Parry and with computer poets such as Racter. Interactive fiction follows, starting with Storyspace pioneers like Michael Joyce, Stuart Moulthrop, and Shelley Jackson.

          Cyborg applications to criticism, philology, and textual bibliography have a longer history. Since the 1960s, text analysis (more recently, corpus linguistics) has used text-retrieval and statistical software to assist criticism. These tools help document the structure and meaning of texts (e.g., Smith's and Theall's studies of Joyce), the style and memory of authors in texts (Mosteller and Wallace, Milic, Burrows, Foster, and Lancashire), and the language of diachronic and synchronic collectives (Cluett, the ICAME group, Sinclair, and Biber).

          The second major cyborg application is the textual cyberspace. Initially hypertexts and lately hypermedia (with images, sound, and film), these e-editions are of two broad kinds: constrained, single-author or single-subject gardens; and unbounded Web docuverses. Bush, Nelson, and Landow conceived of memex-hypertexts as unconstrained, multidimensional spaces in which, in theory, everything can be linked to everything else, whether by authors or by readers. McGann, Crane, Robinson, and many others have cultivated Web gardens devoted to the likes of the Rossettis, ancient Greek civilization, and Chaucer.
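
          The statistical text analysis described above can be made concrete with a small example. The sketch below (in Python; the marker words and file names are merely illustrative, not drawn from any study named here) counts the relative frequencies of a few function words, the kind of stylistic evidence on which attribution work like Mosteller and Wallace's relies:

    from collections import Counter
    import re

    # Function words whose rates of use can serve as stylistic markers.
    # These particular words, like the file names below, are illustrative.
    MARKERS = ["while", "whilst", "upon", "enough", "although"]

    def marker_frequencies(text):
        """Return each marker's rate per 1,000 running words of text."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        total = max(len(words), 1)   # avoid dividing by zero on empty input
        return {m: 1000 * counts[m] / total for m in MARKERS}

    # Compare two candidate authors' known prose.
    print(marker_frequencies(open("author_a.txt").read()))
    print(marker_frequencies(open("author_b.txt").read()))

Real attribution studies weigh many more markers and test the differences statistically, but the underlying operation is no more than this kind of counting.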

          Cymacs replace rather than extend humanity. The theory of AI is voluminous, but von Neumann (1954), Boden (1977), Moravec (1988), and Kurzweil (1998) are representative. Storytelling by programs has made some progress in the past few decades, as BRUTUS (2000) and Turner (1994) show. Fiction about cymacs includes William Gibson and Bruce Sterling's The Difference Engine (1991) and Marge Piercy's He, She, and It (1991). Cymac applications in cybertext, predominantly question-answering and machine translation, can be seen in the Web-based Google (1998), Babel Fish (1999), and Ananova (2000). Neural-network computing has achieved results suggestive of the growth of independent intelligences.

          To know cybertext well, we must practice it ourselves. We cannot become cymacs, but we can become cyborgs. This course, then, introduces you to some technologies of cybertext. Graeme Kennedy's book on corpus linguistics explains the basics of how to make and apply a corpus. Using TACT (1996) describes how to apply interactive concordances to close readings. Storyspace is an engine for making interactive fictions and hypertexts. Dreamweaver and XMetaL are programs for creating HTML and XML Web pages.
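
          The concordances that TACT builds interactively can likewise be sketched in miniature. The following lines (again in Python; the file name and keyword are hypothetical) print a simple keyword-in-context display, the elementary operation that interactive concordance programs elaborate:

    import re

    def concordance(text, keyword, width=30):
        """Print every occurrence of keyword with width characters of
        context on either side: a keyword-in-context (KWIC) display."""
        pattern = r"\b" + re.escape(keyword) + r"\b"
        for match in re.finditer(pattern, text, re.IGNORECASE):
            start = max(match.start() - width, 0)
            end = min(match.end() + width, len(text))
            print(" ".join(text[start:end].split()))   # collapse line breaks

    # Hypothetical usage on an electronic text:
    concordance(open("some_etext.txt").read(), "memory")

Programs like TACT build on this elementary operation, adding indexing, collocation, and distribution displays.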

          This course will take a chronological perspective on cybertext. Since Norbert Wiener's work in the 1940s, cybernetics has hypothesized that human beings and their complex machines interact so intimately that they become alike. Both self-regulate, for example. When computers emerged fully in World War II as decryption tools, they appeared superficially human, acquiring rudimentary language skills. Once computer languages like Fortran ('for[mula] tran[slation]', 1956) developed in the 1950s, people learned artificial languages to communicate with machines. Artificial intelligence (AI) followed in the 1960s. It focused on machine translation of human languages: automata that convert texts from one natural language, the source, into another, the target, often through an intermediary artificial language. After initial (drastic) failure, machine translation systems like Systran became successful language industries by the late 1980s. Neural-network computing, revived about that time, demonstrated that computers could learn by adopting electronic procedures that resemble the chemical ones the human brain uses. This advanced the main goal of cybernetics: to show that human beings and machines are fundamentally alike.
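
          The learning that closes this history can itself be suggested in miniature. The sketch below is a classic single-neuron perceptron (an ancestor of, not one of, the multi-layer networks revived in the 1980s): it teaches itself the logical AND function by nudging its connection weights after every mistake, with no rule for AND ever programmed in:

    # A single artificial neuron (perceptron) learning logical AND.
    # Training data: two inputs and the desired output for each pair.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias, rate = [0.0, 0.0], 0.0, 0.1

    def fire(x1, x2):
        """The neuron emits 1 if its weighted inputs exceed the threshold."""
        return 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0

    for epoch in range(20):                 # repeated exposure to the data
        for (x1, x2), target in examples:
            error = target - fire(x1, x2)   # zero when the neuron is right
            weights[0] += rate * error * x1
            weights[1] += rate * error * x2
            bias += rate * error

    for (x1, x2), target in examples:
        print((x1, x2), "->", fire(x1, x2))  # now reproduces AND

Nothing but the adjustment procedure is programmed; the acquired rule ends up distributed across the weights, which is the sense in which such systems learn.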

Conduct of Course

Lectures, seminar papers, discussion, and workshops.

Students need have no special computer expertise, beyond word-processing and some familiarity with the Web, to take this course.

Recommended Readings

Course authors for discussion will be selected from the following, in this approximate order. Convenient anthologies of criticism include Cyberspace Textuality: Computer Technology and Literary Theory, ed. Marie-Laure Ryan (Indiana University Press, 1999) and The Literary Text in the Digital Age, ed. Richard J. Finneran (University of Michigan Press, 1996). The first is on short-term loan in the Library.

We will have workshops or how-to sessions on tools for cybertext applications. Two useful books on these are Graeme Kennedy, An Introduction to Corpus Linguistics (Longman, 1998), and Ian Lancashire, John Bradley, Willard McCarty, Russon Wooldridge, and Michael Stairs, Using TACT with Electronic Texts (MLA, 1996). Both are on short-term loan in the Library. Also recommended are Shillingsburg (1996), Robinson (1993), Sinclair (1991), Potter (1989), and Hockey (1985).

Gateways to on-line theory, applications, and literature are many. I recommend the E-literature site, the Google search engine, Berkeley's Current Cites, Alan Liu's Voice of the Shuttle, the Open Source Initiative, and Postmodern Culture.

Requirements

Each student will be expected to write an essay and to create a cybertext application.

Students may have, without special permission, one extension for either the essay or the application -- until May 15, 2001.

Select Bibliography

(c) Ian Lancashire 2000