DeafBlind Americans developed a language that doesn’t involve sight or sound

When I first met neuroscientist Clifton Langdon and his research assistant Oscar Serna on a hot day in July at Gallaudet University in Washington, DC, I quickly became acutely aware that despite the fact that we all understood English, we did not speak the same language.

Upon meeting, each took my hand enthusiastically to greet me. Then Langdon, who is Deaf, began signing fervently in American Sign Language (ASL). Serna, who is DeafBlind—meaning he can’t see or hear—then placed his left hand loosely on top of Langdon’s right and followed Langdon’s hand movements as the latter signed. (The capitalization denotes identity, rather than ability.)

When Serna had something to add, he responded in pro-tactile ASL: he tapped and patted his interpreter (sitting next to him) anywhere from her shoulders to her knees. She translated those taps and pats by signing them in ASL, the most prominent form of sign language in the US with about 2 million users. A second interpreter translated that ASL into spoken English for me.

Despite the friendly nature of the conversation, it was anxiety-inducing. Being in the presence of any foreign language can be overwhelming: not only are you unable to understand what’s being said, but you can’t voice your own opinions or needs. With spoken word, you can get a sense of what’s going on based on your knowledge of similar languages and conversational tone—and, of course, you can interject vocally if you really need to (even if you aren’t actually speaking the language of those around you). And when communicating with someone who can see you, you can wave or gesture to indicate you don’t follow—no matter what verbal language they speak.

But in front of people speaking pro-tactile ASL, it’s difficult to even figure out how to make your presence known.

Pro-tactile ASL is not widely used or well-known, in large part because it’s so new. “Prior to 2007, DeafBlind people rarely communicated directly with one another,” said Terra Edwards, an anthropological linguist at Gallaudet, in a 2011 interview with the Wenner-Gren Foundation.

In America, there are between 45,000 and 50,000 DeafBlind people, according to Gallaudet. Since the 1980s, in order to interact, DeafBlind people have relied on methods like Braille, fingerspelling—the way Helen Keller communicated—and physically tracking ASL, the way Serna did with Langdon.

From 2006 to 2011, Edwards spent significant time with the large DeafBlind community in Seattle—her research there was eventually published in a 2014 dissertation at the University of California, Berkeley. Edwards found that though the Seattle DeafBlind community had a network of available interpreters, communicating through touch was seen as an extra accommodation for those who had completely lost their sight. For years, people who were DeafBlind with limited vision clung to ASL because they wanted to be perceived by others as being able to see, to limit any potential discrimination.

In her dissertation, Edwards writes that although there were resources for the DeafBlind community in Seattle, most relied on interpreters who needed to be booked in advance. Impromptu meetings were out of the question. Between 2006 and 2008, members of the community, constrained by the financial and logistical burdens of hiring interpreters, simply began using pro-tactile ASL on their own. As they did, the language grew organically, and the “pro-tactile” movement took hold. It “called for the cultivation of tactile dispositions regardless of sensory capacity,” Edwards said in the 2011 interview.

Pro-tactile ASL borrows bits and pieces from ASL, adapting them to be useful for people who can’t see. Rather than using their own hands as a reference for communication, people who convey information with pro-tactile ASL use the perceiver’s hands and body. The speaker will touch the perceiver’s body and move his or her hands; in doing so, the speaker takes advantage of the perceiver’s proprioception, or sense of where his or her limbs are. “When we’re talking about a particular shape, instead of showing the shape in space, you’d show [by moving] the perceiver’s arm,” said Serna.

Take, for example, the sentence “I climbed the tree.” A person signing this sentence in ASL would move their own hands and arms. Instead, Serna communicated it to me by taking my left forearm and using his right index and middle fingers to “walk” from my elbow to my palm. My forearm represented the tree, and his fingers represented himself making his way up.

In other contexts, the forearm can be used to communicate other ideas—Serna showed me the sign for “lollipop,” in which my forearm was the stick and he cupped his hand over my fist to represent the candy at the top. It all depends on the situation. After all, language is contextual—whether spoken, signed, or otherwise communicated. For example, when we process spoken words, says Langdon, “we’re looking at word order, clusters of words, and all of those different components are happening at the same time.” Somehow, we put all the words, syntax, and semantics together—and then have to throw in nonverbal cues, too—to make sense of them.

We intuitively use all sorts of non-verbal signals—tone, nodding, crossing our arms—that give two people communicating with each other a sense of one another’s emotions or understanding of a situation. “There’s something called ‘backchanneling,’” says Langdon. It’s what one person does to indicate how they’re taking in the information being given to them, and it’s important for both parties in a conversation. If you’re speaking, you want to know that your audience is engaged and following along; if you’re listening, you want to be able to show that you agree, disagree, or don’t understand.

In the hearing world, we get this through verbal validation—the “uh-huhs” and “mmhs” peppered in here and there—or via facial cues like a smile or furrowed eyebrows. In ASL, backchanneling is entirely visual, involving different facial expressions. And in pro-tactile ASL, it happens through touch. For affirmation, Langdon taps Serna’s leg; laughing is gently scratching his leg or tapping his neck.

When all of this happens simultaneously, the flurry of motion works together like gears in a machine, a diverse array of small pieces combining into a unified communique. For those who are fluent, pro-tactile ASL can communicate an infinite number of ideas, with feedback connecting the speaker and perceiver—or group of perceivers, as the case may be.

Despite all its intricacies, Serna says pro-tactile ASL isn’t any harder to learn than other languages. “It’s the same concept as learning any other language,” he says. “It’s just connecting with the group of people whose language you want to learn.”

In this case, the connection is physical. There’s something strikingly powerful about the language that comes from its required intimacy.

It’s not very often in the hearing and seeing world that you’d take the arm of someone you’ve just met, or pat their back or thigh, to tell them a story. Usually, touch is reserved for those we’re closest to. But when pro-tactile users come together to speak, they’re entering each other’s personal space. Immediately, respect and trust are required.

Pro-tactile ASL has created opportunities for richer communication for the DeafBlind community, and it continues to grow and move towards the mainstream. Langdon works with a group called Tactile Communications, which studies and advocates for pro-tactile ASL; this summer, the group presented their work at the White House.

“[ASL] felt like an old dial-up internet connection,” Serna said. “I got limited information and it didn’t come very fast. But with pro-tactile ASL, it feels like the world has opened up—I get the same information and the same speed as broadband internet.”