

  90. See Corina, 1989.

  Other parts of the body may also be involved. Any or all of this—this vast range of actual or potential inflections, spatial and kinetic—can converge upon the root signs, fuse with them, and modify them, compacting an enormous amount of information into the resulting signs.

  It is the compression of these sign units, and the fact that all their modifications are spatial, that makes Sign, at the obvious and visible level, completely unlike any spoken language, and which, in part, prevented it from being seen as a language at all. But it is precisely this, along with its unique spatial syntax and grammar, which marks Sign as a true language—albeit a completely novel one, out of the evolutionary mainstream of all spoken languages, a unique evolutionary alternative. (And, in a way, a completely surprising one, considering that we have become specialized for speech in the last half million or two million years. The potentials for language are in us all—this is easy to understand. But that the potentials for language in a visual mode should also be so great—this is astonishing, and would hardly be anticipated if visual language did not actually occur. But, equally, it might be said that making signs and gestures, albeit without complex linguistic structure, goes back to our remote, pre-human past—and that speech is really the evolutionary newcomer: a highly successful newcomer which could replace the hands, freeing them for other, non-communicational purposes. Perhaps, indeed, there have been two parallel evolutionary streams for spoken and signed forms of language: this is suggested by the work of certain anthropologists, who have shown the co-existence of spoken and signed languages in some primitive tribes. 91

  91. See Lévy-Bruhl, 1966.

  Thus the deaf, and their language, show us not only the plasticity but the latent potentials of the nervous system.)

  The single most remarkable feature of Sign—that which distinguishes it from all other languages and mental activities—is its unique linguistic use of space. 92 The complexity of this linguistic space is quite overwhelming for the ‘normal’ eye, which cannot see, let alone understand, the sheer intricacy of its spatial patterns.

  92. Since most research on Sign at present takes place in the United States, most of the findings relate to American Sign Language, although others (Danish, Chinese, Russian, British) are also being investigated. But there is no reason to suppose these are peculiar to ASL—they probably apply to the entire class of visuo-spatial languages.

  We see then, in Sign, at every level—lexical, grammatical, syntactic—a linguistic use of space: a use that is amazingly complex, for much of what occurs linearly, sequentially, temporally in speech, becomes simultaneous, concurrent, multileveled in Sign. The ‘surface’ of Sign may appear simple to the eye, like that of gesture or mime, but one soon finds that this is an illusion, and what looks so simple is extraordinarily complex and consists of innumerable spatial patterns nested, three-dimensionally, in each other. 93

  93. As one learns Sign, or as the eye becomes attuned to it, it is seen to be fundamentally different in character from gesture, and is no longer to be confused with it for a moment. I found the distinction particularly striking on a recent visit to Italy, for Italian gesture (as everyone knows) is large and exuberant and operatic, whereas Italian Sign is strictly constrained within a conventional signing space, and strictly constrained by all the lexical and grammatical rules of a signed language, and not in the least ‘Italianate’ in quality: the difference between the para-language of gesture and the actual language of Sign is evident here, instantly, to the untutored eye.

  The marvel of this spatial grammar, of the linguistic use of space, engrossed Sign researchers in the 1970’s, and it is only in the present decade that equal attention has been paid to time. Although it was recognized earlier that there was sequential organization within signs, this was regarded as phonologically unimportant, basically because it could not be ‘read.’ It has required the insights of a new generation of linguists—linguists who are themselves often deaf, or native users of Sign, who can analyze its refinements from their own experience of it, from ‘within’—to bring out the importance of such sequences within (and between) signs. The Supalla brothers, Ted and Sam, among others, have been pioneers here. Thus, in a groundbreaking 1978 paper, Ted Supalla and Elissa Newport demonstrated that very finely detailed differences in movement could distinguish some nouns from related verbs: it had been thought earlier (for example, by Stokoe) that there was a single sign for ‘sit’ and ‘chair’—but Supalla and Newport showed the signs for these were subtly but crucially separate. 94

  94. Supalla and Newport, 1978.

  The most systematic research on the use of time in Sign has been done by Scott Liddell and Robert Johnson and their colleagues at Gallaudet. Liddell and Johnson see signing not as a succession of instantaneous ‘frozen’ configurations in space, but as continually and richly modulated in time, with a dynamism of ‘movements’ and ‘holds’ analogous to that of music or speech. They have demonstrated many types of sequentiality in ASL signing—sequences of handshapes, locations, non-manual signs, local movements, movements-and-holds—as well as internal (phonological) segmentation within signs. The simultaneous model of structure is not able to represent such sequences, and may indeed prevent their being seen. Thus it has been necessary to replace the older static notions and descriptions with new, and often very elaborate, dynamic notations, which have some resemblances to the notations for dance and music. 95

  95. See Liddell and Johnson, 1989, and Liddell and Johnson, 1986.

  No one has watched these new developments with more interest than Stokoe himself, and he has focused specifically on the powers of ‘language in four dimensions’: 96

  96. Stokoe, 1979.

  Speech has only one dimension—its extension in time; writing has two dimensions; models have three; but only signed languages have at their disposal four dimensions—the three spatial dimensions accessible to a signer’s body, as well as the dimension of time. And Sign fully exploits the syntactic possibilities in its four-dimensional channel of expression.

  The effect of this, Stokoe feels—and here he is supported by the intuitions of Sign artists, playwrights, and actors—is that signed language is not merely prose-like and narrative in structure, but essentially ‘cinematic’ too:

  In a signed language…narrative is no longer linear and prosaic. Instead, the essence of sign language is to cut from a normal view to a close-up to a distant shot to a close-up again, and so on, even including flashback and flash-forward scenes, exactly as a movie editor works…Not only is signing itself arranged more like edited film than like written narration, but also each signer is placed very much as a camera: the field of vision and angle of view are directed but variable. Not only the signer signing but also the signer watching is aware at all times of the signer’s visual orientation to what is being signed about.

  Thus, in this third decade of research, Sign is seen as fully comparable to speech (in terms of its phonology, its temporal aspects, its streams and sequences), but with unique, additional powers of a spatial and cinematic sort—at once a most complex and yet transparent expression and transformation of thought. 97

  97. Again, Stokoe describes some of this complexity:

  When three or four signers are standing in a natural arrangement for sign conversation…the space transforms are by no means 180-degree rotations of the three-dimensional visual world but involve orientations that non-signers seldom if ever understand. When all the transforms of this and other kinds are made between the signer’s visual three-dimensional field and that of each watcher, the signer has transmitted the content of his or her world of thought to the watcher. If all the trajectories of all the sign actions—direction and direction-change of all upper arm, forearm, wrist, hand and finger movement, all the nuances of all the eye and face and head action—could be described, we would have a description of the phenomena into which thought is transformed by a sign language…These superimpositions of semantics onto the space-time manifold need to be separated out if we are to understand how language and thought and the body interact.

  The cracking of this enormously complex, four-dimensional structure may need the most formidable hardware, as well as an insight approaching genius. 98

  98. ‘We currently analyze three dimensional movement using a modified OpEye system, a monitoring apparatus permitting rapid high-resolution digitalization of hand and arm movements…Opto-electronic cameras track the positions of light-emitting diodes attached to the hands and arms and provide a digital output directly to a computer, which calculates three-dimensional trajectories’ (Poizner, Klima, and Bellugi, 1987, p. 27). See fig. 2.

  And yet it can also be cracked, effortlessly, unconsciously, by a three-year-old signer. 99

  99. Though unconscious, learning language is a prodigious task—but despite the differences in modality, the acquisition of ASL by deaf children bears remarkable similarities to the acquisition of spoken language by a hearing child. Specifically, the acquisition of grammar seems identical, and this occurs relatively suddenly, as a reorganization, a discontinuity in thought and development, as the child moves from gesture to language, from prelinguistic pointing or gesture to a fully grammaticized linguistic system: this occurs at the same age (roughly twenty-one to twenty-four months) and in the same way, whether the child is speaking or signing.

  What goes on in the mind and brain of a three-year-old signer, or any signer, that makes him such a genius at Sign, makes him able to use space, to ‘linguisticize’ space, in this astonishing way? What sort of hardware does he have in his head? One would not think, from the ‘normal’ experience of speech and speaking, or from the neurologist’s understanding of speech and speaking, that such spatial virtuosity could occur. It may indeed not be possible for the ‘normal’ brain—i.e., the brain of someone who has not been exposed early to Sign. 100 What then is the neurological basis of Sign?

  100. It has been shown by Elissa Newport and Ted Supalla (see Rymer, 1988) that late learners of Sign—which means anyone who learns Sign after the age of five—though competent enough, never master its full subtleties and intricacies, are not able to ‘see’ some of its grammatical complexities. It is as if the development of special linguistic-spatial ability, of a special left hemisphere function, is only fully possible in the first years of life. This is also true for speech. It is true for language in general. If Sign is not acquired in the first five years of life, but is acquired later, it never has the fluency and grammatical correctness of native Sign: some essential grammatical aptitude has been lost. Conversely, if a young child is exposed to less-than-perfect Sign (because the parents, for example, only learned Sign late), the child will nonetheless develop grammatically correct Sign—another piece of evidence for an innate grammatical aptitude in childhood.

  Figure 2. Computer-generated images showing three different grammatical inflections of the sign LOOK: LOOK ALL OVER, LOOK ACROSS A SERIES, and LOOK AT INTERNAL FEATURES. The beauty of a spatial grammar, with its complex three-dimensional trajectories, is well brought out by this technique (see footnote 98, p. 91). (Reprinted by permission from Ursula Bellugi, The Salk Institute for Biological Studies, La Jolla, California.)

  Having spent the 1970’s exploring the structure of sign language, Ursula Bellugi and her colleagues are now examining its neural substrates. This involves, among other methods, the classical method of neurology, which is to analyze the effects produced by various lesions of the brain—the effect, here, on sign language and on spatial processing generally, as these may be observed in deaf signers with strokes or other lesions.

  It has been thought for a century or more (since Hughlings-Jackson’s formulations in the 1870’s) that the left hemisphere of the brain is specialized for analytic tasks, above all for the lexical and grammatical analysis that makes the understanding of spoken language possible. The right hemisphere has been seen as complementary in function, dealing in wholes rather than parts, with synchronous perceptions rather than sequential analyses, and, above all, with the visual and spatial world. Sign languages clearly cut across these neat boundaries—for on the one hand, they have lexical and grammatical structure, but on the other, this structure is synchronous and spatial. Thus it was quite uncertain even a decade ago, given these peculiarities, whether sign language would be represented in the brain unilaterally (like speech) or bilaterally; which side, if unilateral, it would be represented on; whether, in the event of a sign aphasia, syntax might be disturbed independently of lexicon; and, most intriguingly, given the interweaving of grammatical and spatial relations in Sign, whether spatial processing, overall spatial sense, might have a different (and conceivably stronger) neural basis in deaf signers.

  These were some of the questions faced by Bellugi and her colleagues when they launched their research. 101

  101. The prescient Hughlings-Jackson wrote a century ago: ‘No doubt, by disease of some part of the brain the deaf-mute might lose his natural system of signs which are of some speech-value to him,’ and thought this would have to affect the left hemisphere.

  At the time, actual reports on the effects of strokes and other brain lesions on signing were rare, unclear, and often inadequately studied—in part because there was little differentiation between finger spelling and Sign. Indeed, Bellugi’s first and central finding was that the left hemisphere of the brain is essential for Sign, as it is for speech, and that Sign uses some of the same neural pathways as are needed for the processing of grammatical speech—but, in addition, some pathways normally associated with visual processing.

  That signing uses the left hemisphere predominantly has also been shown by Helen Neville, who has demonstrated that Sign is ‘read’ more rapidly and accurately by signers when it is presented in the right visual field (information from each side of the visual field is always processed in the opposite hemisphere). This may also be shown, in the most dramatic way, by observing the effects of lesions (from strokes, etc.) in certain areas of the left hemisphere. Such lesions may cause an aphasia for Sign—a breakdown in the understanding or use of Sign analogous to the aphasias of speech. Such sign aphasias can affect either the lexicon or the grammar (including the spatially organized syntax) of Sign differentially, as well as impairing the general power to ‘propositionize’ which Hughlings-Jackson saw as central to language. 102

  102. The kinship of speech aphasia and sign aphasia is illustrated in a recent case reported by Damasio et al. in which a Wada test (an injection of sodium amytal into the left carotid artery—to determine whether or not the left hemisphere was dominant) given to a young, hearing Sign interpreter with epilepsy brought about a temporary aphasia of both speech and Sign. Her ability to speak English started to recover after four minutes; the sign aphasia lasted a minute or so longer. Serial PET scans were done throughout the procedure and showed that roughly similar portions of the left hemisphere were involved in speech and signing, although the latter seemed to require larger brain areas, in particular the left parietal lobe, as well (Damasio et al., 1986).

  But aphasic signers are not impaired in other, nonlinguistic visual-spatial abilities. (Gesture, for example—the non-grammatical expressive movements we all make [shrugging the shoulders, waving good-bye, brandishing a fist, etc.]—is preserved in aphasia, even though Sign is lost, emphasizing the absolute distinction between the two. Patients with aphasia, indeed, can be taught to use ‘Amerindian Gestural Code,’ but cannot use Sign, any more than they can use speech.) 103

  103. There is considerable evidence that signing may be useful with some autistic children who are unable or unwilling to speak; Sign may allow such children a degree of communication which had seemed unimaginable (Bonvillian and Nelson, 1976). This may be in part, so Rapin feels, because some autistic children may have specific neurological difficulties in the auditory sphere, but much greater intactness in the visual sphere.

  Though Sign cannot be of help with the aphasic, it may help the retarded and senile with very limited or eroded capacities for spoken language. This may be due in part to the graphic and iconic expressiveness of Sign, and in part to the relative motor simplicity of its movements, compared with the extreme complexity and vulnerability of the mechanism for speech.

  Signers with right hemisphere strokes, in contrast, may have severe spatial disorganization, an inability to appreciate perspective, and sometimes neglect of the left side of space—but are not aphasic and retain perfect signing ability despite their severe visual-spatial deficits. Thus signers show the same cerebral lateralization as speakers, even though their language is entirely visuo-spatial in nature (and as such might be expected to be processed in the right hemisphere).

  This finding, when one considers it, is both startling and obvious, and leads to two conclusions. It confirms, at a neurological level, that Sign is a language and is treated as such by the brain, even though it is visual rather than auditory, and spatially rather than sequentially organized. And as a language, it is processed by the left hemisphere of the brain, which is biologically specialized for just this function.

  The fact that Sign is based here in the left hemisphere, despite its spatial organization, suggests that there is a representation of ‘linguistic’ space in the brain completely different from that of ordinary, ‘topographic’ space. Bellugi provides a remarkable and startling confirmation of this. One of her subjects, Brenda I., with a massive right hemisphere lesion, showed a profound neglect of the left side of space, so that when she described her room, she put everything, higgledy-piggledy, on the right side, leaving the left side entirely vacant. The left side of space—of topographic space—no longer existed for her (fig. 3a-b). But in her actual signing, she established spatial loci, and signed freely, throughout the signing space, including the left side (fig. 3c). Thus her perceptual space, her topographic space, a right hemisphere function, was profoundly defective; but her linguistic space, her syntax space, a left hemisphere function, was completely intact.