The linguistics of signifying time: The human gesture as clock
- March 17, 2016
- A new scientific study documenting the linguistic practices of the Northwestern Amazonian peoples uncovers an unusual method of communicating the human concept of time.
The study, “Modally hybrid grammar? Celestial pointing for time-of-day reference in Nheengatú,” by Simeon Floyd of the Max Planck Institute for Psycholinguistics in the Netherlands, was published in the March 2016 issue of the scholarly journal Language.
The article examines how the Nheengatú language uses both auditory and visual components to express the time of day, even though it has no numerical or written system for telling time. Speakers of Nheengatú talk about the time of day by pointing at where the sun would be in the sky at that time; for them, this is equivalent to saying “nine o’clock” in English. The practice is notable because many linguists have assumed that users of spoken languages would not also develop visual grammar of the kind seen in sign languages, but this phenomenon shows that is not necessarily the case.
When we think of grammar, we might think of categories like nouns, verbs, adjectives, and adverbs that people communicate by vocalizing. Research with speakers of Nheengatú shows that this is not always the case: in some languages it is possible to convey some of these concepts by combining movements of the hands and body with speech in systematic ways. Here the visual elements play a role comparable to that of spoken adverbs, adding information about time to the verbs they accompany.
These Nheengatú physical expressions are the type of visual language we expect to see in sign languages. For spoken languages, by contrast, it is often assumed that all of the words are audible and that the gestures accompanying speech supply only extra, peripheral meanings, not the main information about the topic of talk. These practices, observed in small communities in the Amazon, have the potential to change how scientists think about the modalities in which language is expressed: they show that humans do not have to choose between speaking and signing, but can do both simultaneously.
Nheengatú time reference is just one of the combinations of spoken and visual language that some linguists are beginning to suspect may be more common than currently known. Because many languages have historically been studied only through written words and audio recordings, future scientific studies based on video recordings may uncover new and unexpected combinations of spoken and visual language that were previously invisible.
Story Source: Science Daily
Hand gestures improve learning in both signers, speakers
- Date: August 19, 2014 Source: University of Chicago
- Spontaneous gesture can help children learn, whether they use a spoken language or sign language, according to a new report.
Susan Goldin-Meadow’s new study examines how gesturing contributes to language learning in both hearing and deaf children. She concludes that gesture is a flexible way of communicating, one that can work with language to communicate or, if necessary, can itself become language. The article is published online by Philosophical Transactions of the Royal Society B and will appear in the Sept. 19 print issue of the journal, a theme issue on “Language as a Multimodal Phenomenon.”
“Children who can hear use gesture along with speech to communicate as they acquire spoken language,” Goldin-Meadow said. “Those gesture-plus-word combinations precede and predict the acquisition of word combinations that convey the same notions. The findings make it clear that children have an understanding of these notions before they are able to express them in speech.”
In addition to children who learned spoken languages, Goldin-Meadow studied children who learned sign language from their parents. She found that they, too, use gestures as they communicate in American Sign Language, and that these gestures predict learning, just like the gestures that accompany speech.
Finally, Goldin-Meadow looked at deaf children whose hearing losses prevented them from learning spoken language, and whose hearing parents had not presented them with conventional sign language. These children use homemade gesture systems, called homesign, to communicate. Homesign shares properties with natural languages but is not a full-blown language, perhaps because the children lack “a community of communication partners,” Goldin-Meadow writes. Nevertheless, homesign can be the “first step toward an established sign language.” In Nicaragua, individual gesture systems blossomed into a more complex, shared system when homesigners were brought together for the first time.
These findings provide insight into gesture’s contribution to learning. Gesture plays a role in learning for signers even though it is in the same modality as sign. As a result, gesture cannot aid learners simply by providing a second modality. Rather, gesture adds imagery to the categorical distinctions that form the core of both spoken and sign languages.
Goldin-Meadow concludes that gesture can be the basis for a self-made language, assuming linguistic forms and functions when other vehicles are not available. But when a conventional spoken or sign language is present, gesture works along with language, helping to promote learning.