Can AI help us understand the evolution of language?

A new study used machine learning to show how American Sign Language is shaped by the need for easier ways to communicate.

Linguists find it challenging to agree on how and why languages evolve. But with the help of artificial intelligence (AI), they now might be able to.

A new study of American Sign Language (ASL) attempts to model how the structure of a language is shaped by the people who use it, evolving in ways that make communication easier.

“Sign languages offer a unique opportunity to ask how the body shapes the structure of language, because they are produced using different parts of the human body than spoken languages and lend themselves to a different set of perceptual and motoric capacities,” the study’s authors wrote.

Deaf studies scholar Naomi Caselli from Boston University (BU) and a team of researchers found that ASL signs that are challenging to understand – those which are rare or have uncommon handshapes – are made closer to the signer’s face, where people often look during sign perception.

By contrast, signs with common, more routine handshapes are made further away from the face, in the perceiver’s peripheral vision.

To reach those conclusions, Caselli, along with researchers from Syracuse University and the Rochester Institute of Technology, studied the evolution of ASL with the assistance of an AI tool that analysed videos of over 2,500 signs from ASL-LEX, the world’s largest interactive ASL database.

According to Caselli, the findings, which were published in the journal Cognition, suggest that ASL signs have evolved to be easier for people to recognise.

“Every time we use a word, it changes just a little bit,” said Caselli, a Wheelock College of Education & Human Development assistant professor at BU. “Over long periods of time, words with uncommon handshapes have evolved to be produced closer to the face and, therefore, are easier for the perceiver to see and recognise.”

Although studying the evolution of language is complex, Caselli said, “you can make predictions about how languages might change over time, and test those predictions with a current snapshot of the language.”

Caselli added that the team began by using an AI algorithm to estimate the position of the signer’s body and limbs.

“We feed the video into a machine learning algorithm that uses computer vision to figure out where key points on the body are. We can then figure out where the hands are relative to the face in each sign.”
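The article does not name the specific computer-vision tool the researchers used. As a rough sketch of the kind of pipeline Caselli describes, the snippet below uses MediaPipe Pose purely as a stand-in: it pulls body key points from a sign video and measures how far the hands are from the face.

```python
# Sketch of the kind of pipeline described above: extract body key points from a
# sign video with an off-the-shelf pose-estimation model, then measure how far
# the hands are from the face. MediaPipe Pose stands in here for whatever
# computer-vision tool the researchers actually used.
import cv2
import mediapipe as mp
import numpy as np

mp_pose = mp.solutions.pose

def mean_hand_to_face_distance(video_path: str) -> float:
    """Average normalised distance between the wrists and the nose across frames."""
    distances = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not results.pose_landmarks:
                continue
            lm = results.pose_landmarks.landmark
            nose = lm[mp_pose.PoseLandmark.NOSE.value]
            for wrist_id in (mp_pose.PoseLandmark.LEFT_WRIST, mp_pose.PoseLandmark.RIGHT_WRIST):
                wrist = lm[wrist_id.value]
                # Landmark coordinates are normalised to the frame, so the distance is unitless.
                distances.append(np.hypot(wrist.x - nose.x, wrist.y - nose.y))
    cap.release()
    return float(np.mean(distances)) if distances else float("nan")
```

Repeating a measurement like this for each of the 2,500-plus ASL-LEX videos would yield one hand-to-face distance per sign, the quantity the researchers then compare against how common each sign and handshape is.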

The researchers then matched those hand-position measurements with data from ASL-LEX about how often the signs and handshapes are used.

For example, they found that many signs that use common handshapes, such as the sign for “children”, are produced further from the face than signs that use rarer handshapes, like the one for “light”.
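The article does not describe the study’s statistical analysis in detail. As a hedged illustration of the matching step, the sketch below joins per-sign distances with hypothetical ASL-LEX frequency columns and runs a rank correlation, one way such a prediction could be tested; all numbers are invented toy values.

```python
# Hedged illustration of the matching step described above, not the study's own
# analysis code. Per-sign hand-to-face distances are joined with hypothetical
# ASL-LEX frequency columns, then a rank correlation asks whether signs with
# rarer handshapes tend to be produced closer to the face. All numbers are
# invented toy values, not measurements from the study.
import pandas as pd
from scipy.stats import spearmanr

distances = pd.DataFrame({
    "gloss": ["CHILDREN", "LIGHT", "SIGN_A", "SIGN_B", "SIGN_C"],
    "hand_to_face": [0.45, 0.12, 0.38, 0.20, 0.31],   # from the pose-estimation step
})
asllex = pd.DataFrame({
    "gloss": ["CHILDREN", "LIGHT", "SIGN_A", "SIGN_B", "SIGN_C"],
    "handshape_frequency": [0.90, 0.05, 0.70, 0.15, 0.55],  # how common each handshape is
})

merged = distances.merge(asllex, on="gloss")
rho, p = spearmanr(merged["handshape_frequency"], merged["hand_to_face"])
# A positive correlation would mean signs with common handshapes sit further from the face.
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```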

The dataset was created with help from the Hariri Institute’s Software & Application Innovation Lab, and it is part of a growing body of work connecting computing and sign language research at BU.

“The team behind these projects is dynamic, with signing researchers working in collaboration with computer vision scientists,” said Lauren Berger, a Deaf scientist and postdoctoral fellow at BU who works on computational approaches to sign language research.

Caselli believes that understanding how sign languages work can help improve Deaf education, and she hopes the latest findings can bring attention to the diversity of human languages and the remarkable abilities of the human mind.

“If all we study is spoken languages, it is hard to tease apart the things that are about language in general from the things that are particular to the auditory-oral modality. Sign languages offer a neat opportunity to learn about how all languages work,” Caselli said.

“Now with AI, we can manipulate large quantities of sign language videos and actually test these questions empirically.”
