Deaf Language and Biology and Culture
ON JANUARY 10, 2022 BY BRADIN DEAF GAIN, DEAFNESS
It’s time for another chapter from Deaf Gain. This one is Laura-Ann Petitto’s “Three Revolutions: Language, Culture, and Biology.” The author had the easiest-to-read style in the book. Most of the other chapters read like submissions to academic journals. This one read like an anecdote. She gives lots of lectures on the wonders of signed languages, and the chapter sprang from a recurring epilogue. After the talks, hearing peeps would often opine that “surely speech is better.” To which she answers, “Nope, and don’t call me Shirley.” I may be paraphrasing.

For this chapter, Dr. Petitto brought science to an opinion fight. She cited evidence from academic journals and from neuroimaging studies. Her first revolution focuses on language. She calls back to Noam Chomsky’s 1957 book Syntactic Structures, in which he theorizes that there’s a part of the brain responsible for how we understand and use grammar. It’s very creatively called Universal Grammar (UG). There’s more to it than that, but I don’t quite understand. Nor am I going to tax my fragile little mind trying, since it’s largely been debunked. The only reason I bring it up, or rather Dr. Petitto brings it up, is because of William Stokoe.

Before linguists cried “fiddlesticks!” at Chomsky’s OG UG theory and pointed to studies to back up their rather comical cry, he was THE authority on linguistics. He’s been called the “Father of Modern Linguistics.” He was kind of a big deal. People knew him. So if someone wanted to go about convincing people that another form of communication, say American Sign Language, was a language and not a random assortment of gestures, they would do well to call back to Chomsky’s Syntactic Structures. In 1960, a Gallaudet professor and Chaucer scholar, William Stokoe, wrote his own paper: “Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf.” Doesn’t quite roll off the tongue (or hand).
But it didn’t have to be pithy or punchy; it just had to prove. And prove it did. It was the first blow to the myth that ASL was bereft of “any formal rules, underlying grammatical principles and regularities, and, crucially, sublexical organization that lie at the heart of human spoken languages.” (67) Being tied to Chomsky’s then-accepted UG made it awfully difficult to discount ASL as a real language. But just because something is difficult doesn’t mean it ain’t done. Despite the strength of the evidence, it took until the 1990s for it to take hold. Perhaps that’s because Chomsky’s work was being deflated during those decades.

But as the 90s gave way to Y2K, the second revolution, biology, breathed new life into Stokoe’s assertion. In 2000, Dr. Petitto was part of a study in which neuroimaging confirmed that “signed and spoken languages are biologically equivalent” (italics in the original on p. 69). In reading some of the study she’s quoting, I am doubly awed by her writing style in this chapter. My eyes were crossing as I read the abstract. So before I go a little deeper into the study, I wish to issue a disclaimer, in homage to Bones: “I’m a librarian, not a neuroscientist!”

Briefly, then, let me pull out some of the science-ese detailed in the study. The brain does the language processing thing in the left hemisphere. To narrow it down, the goings-on happen in what’s called the planum temporale. We can zoom in even closer to what’s known as Wernicke’s Area. What the study focused on was the regional cerebral blood flow to this area. More blood flowing yonder means the area is activated. Before this study, it was assumed that in order for language to get the blood flowing, said language had to be just that. Said. But the study found that some neural pathways are modifiable. They are polymodal. A mode in this instance isn’t like Beast Mode. Polymodal means the pathways that start the beeps and boops of language processing can be activated by more than one mode.
Spoken is one such mode. Signing is another. What was so revolutionary about the study was that it showed the brain processes signed languages the same as it does spoken languages. The nuts and bolts of language processing focus on the information being delivered, not the delivery mode. The study says a lot more, in a lot more educated a mode. But methinks that’s enough for us.

Let’s wrap the chapter, and the study, up. The study was only possible because of the third revolution of the article: culture. With the publication of Stokoe’s paper, Deaf people felt supported in their use of their language. And so, without having to doubt themselves, they were able to create a strong Deaf Culture in the 60s and 70s. Belonging to a culture is invaluable to feeling safe to be yourself. That safe space allowed Deaf parents to raise Deaf babies using ASL. Which in turn helped the study reveal that signing babies babble with their hands! (69) And as a baby starts to learn ASL, their “visual processing and visual attention” changes their brain for the better, leading to “increased early vocabulary, language, and literacy mastery in both ASL and, most fascinatingly, in English.” (71) This study, and others, revealed that “speech and language have now been biologically decoupled.” (73)

I find neuroscience fascinating. It’s amazing how smart people can make associations between external stimuli and internal mental processing. If I were smarter and weren’t as squeamish as water is wet, I think being a neurobiologist would be gnarly. But alas! I guess I’ll just have to settle for reading about such stuff. And, of course, writing about it to share my newfound knowledge. Lucky you!