Beyond Reasonable: a memoir of autism, adoption, and Facilitated Communication

Ralph Savarese’s memoir “Reasonable People” recounts a momentous project undertaken by two people who are manifestly much more than merely reasonable. In the late 1990s, Savarese and his wife decided to adopt a profoundly autistic child, and they succeeded in educating him to the point of communicating in complex sentences, reflecting thoughtfully on his childhood and his autistic identity, composing poetry, and thriving in regular ed classes.

There’s just one issue. The boy composes these complex sentences, thoughtful reflections, poetry, and regular ed assignments through Facilitated Communication.

Savarese is aware of FC’s checkered history. So he trots out the usual pro-FC publications: Cardinal, Hanson & Wakeham (1996); Emerson, Grayson & Griffiths (2001); Sheehan & Matuozzi (1996); and Vasquez (1994). These studies, however, have serious design flaws and have been superseded by more rigorous studies and systematic reviews–most recently Saloviita et al. (2014), Hemsley et al. (2018), and Schlosser et al. (2019).

As far as his anecdotal experiences go, though, Savarese just knows that FC works. When he facilitates his son (who goes by DJ in the book and now, as in the eponymous 2017 film, by Deej), Savarese knows that the person directing the facilitated typing is not himself, but DJ. And, through the messages that he and his wife elicit from DJ through facilitated communication, Savarese also knows what’s keeping DJ from typing independently. The psychological baggage DJ bears from his early years in and out of foster care has led him to associate independence with abandonment. While he is able to physically distance himself from his parents and helpers and, say, go rollerskating with his birth sister, the prospect of typing messages without one of them sitting next to him and supporting his hand, elbow, or shirt cuff, apparently, elicits deep-seated memories of desertion.

Many accounts of purportedly successful FC fail to explain how the facilitated individuals acquired the language and literacy skills needed to type out sophisticated messages. To his credit, Savarese attempts to flesh this out. But his claims about DJ’s road to language and literacy fly in the face of what we know about language and literacy acquisition. Savarese claims that it’s only after DJ learns to read and spell that he’s able to understand spoken language. But where alphabetic writing systems are concerned, the causality flows in the opposite direction. Phonemic awareness of speech sounds is generally a prerequisite for decoding written language, and comprehension of spoken words is generally a prerequisite for comprehending written language. True, many autistic kids are more attentive to printed words than to speech. And between six and ten percent of children with autism exhibit hyperlexia: a precocious ability to decode written words without understanding their meanings. But it’s not the case that print-attentive, hyperlexic children (with or without autism) understand spoken language only after they’ve learned to read and spell.

According to Savarese, however, DJ learned to read, not through phonics, but through sight-word recognition–a methodology that, the research shows, doesn’t get you very far in practice. To see why, imagine what it takes to learn to read words in an unfamiliar alphabet through memorization rather than by sounding them out into familiar spoken words. Here are some Armenian flashcards:

Imagine what it would take to learn these strings of unfamiliar letter shapes without any knowledge of which speech sounds (phonemes) they represent or which (already familiar) spoken words they correspond to. Imagine what it would take to simply memorize these strings of shapes and the pictures that attempt to capture their meanings. 

Next, imagine how you could possibly apply these memorized pattern mappings to make sense of a spoken language of which you had no prior understanding. Here’s the best example of child-directed Armenian I was able to find on YouTube:

Savarese describes a painstaking, step-by-step process towards literacy and spoken language comprehension. But it is a process that, as this Armenian Thought Experiment demonstrates, is well nigh impossible–regardless of diagnosis.

Children, regardless of diagnosis, acquire neither their foundational vocabulary, nor their foundational understanding of spoken language, through the memorization of letter strings and associated pictures. That might be how AI “learns” language, but, as I and others have argued, AI doesn’t actually understand the language it processes. True understanding requires that we map language to the real world.

Young humans do this through what’s called “Joint Attention.” To learn an unfamiliar word, first the child attends to whoever is speaking (or signing or pointing out a written word). Then, often by following that person’s eye gaze or pointing gesture, the child switches their attention to whatever real-world object or event that person appears to be communicating about. (Most children do this constantly and automatically; autistic children often need to have their attention directed more explicitly.)

Frequency of Joint Attention, regardless of diagnosis, is correlated with vocabulary acquisition. And this particular correlation appears to be causal: frequency of Joint Attention at 9 months predicts vocabulary size at 18 months. This holds for children in general, and it holds for children with autism. In the case of autism, reduced Joint Attention is part and parcel of the reduced social connectedness that is autism’s core symptom. Indeed, reduced Joint Attention is correlated with autism severity.

Taken together, the connections between Joint Attention and language acquisition and between Joint Attention and autism severity explain why children with the most profound autism often remain not just non-speaking, but non-verbal: unable to express themselves (in independently authored messages) in any linguistic medium.

Like most proponents of FC, Savarese disputes the notion that reduced social connectedness is fundamental to autism–despite the fact that this has long been both the scientific consensus and the basis for the diagnostic criteria for autism used around the world. Instead, like most FC proponents (most of them not experts in autism), Savarese advocates a redefinition of autism as a movement disorder–one that interferes with the ability of autistic individuals to make their bodies do what they want them to do. Finally, like many FC proponents, Savarese claims that the work of reputable researchers–he cites neurologist Antonio Damasio–supports this redefinition of autism. But all such work has done is support the general consensus that many individuals with autism have difficulty with certain aspects of motor control–like gait. We all know that motor control issues are widespread in autism, but it’s the difficulties with social communication that are universal. And, while there’s plenty of evidence for motor clumsiness in autism, there’s no evidence for the kind of mind-body disconnect that FC proponents depend on to justify FC.

Nor has anyone been able to explain how FCed individuals acquire the sophisticated academic and worldly knowledge that their FCed communications often display. How, in DJ’s case, had he as a fourth-grader acquired knowledge of the dynamics of social class aboard the Titanic and mastery over the algorithms of arithmetic? Even Savarese admits bafflement:

How to account for a neurologically disabled fourth-grader with both class and existential consciousness? How did DJ know these things? How, with almost no training in math, could he do complicated addition, subtraction, multiplication, and division problems in his head? How? How? How? Later, when I’d start investigating this phenomenon, I’d discover that nobody really had a theory beyond a notion of the child with autism somehow storing information (without exactly understanding it) and accessing it once he or she had come to communicative life.

(Savarese, p. 139)

Savarese proposes that teaching DJ how to read may have removed an emotional block that contributed to his autism–and that its removal opened up pathways to speedy academic progress.

Nowhere in Savarese’s account do we find any references to Clever Hans, to the Ideomotor Illusion, or to a much more plausible, empirically documented, explanation for what is going on here. And so we have no reason to dismiss the very real possibility that Savarese, his wife, and DJ’s other facilitators, unwitting victims of the Ideomotor Illusion, are the ones performing complex arithmetic, discussing the Titanic’s class dynamics, and composing the poems and the reflections about autism. And that the vulnerable person they’ve adopted is merely complying with their ever more subtle physical cues–the hand propping, the elbow pressure, the cuff-pinching–trapped inside a network of “reasonable people” who just know that they are doing what’s right for him.

Winograd schemas as a window into reading comprehension

In one of my earlier posts on the limits of Artificial Intelligence, I reported on the difficulty AI has figuring out what “it” means. As measured by so-called Winograd schemas like the one below, even the most sophisticated AI performs at levels not much better than chance:

SENTENCE 1: “The city council refused the demonstrators a permit because they feared violence.”

QUESTION: Who feared violence?

A. The city council
B. The demonstrators

SENTENCE 2: “The city council refused the demonstrators a permit because they advocated violence.”

QUESTION: Who advocated violence?

A. The city council
B. The demonstrators

The problem is that even state of the art AI lacks worldly knowledge–including background knowledge about city councils and demonstrators and their typical concerns and plausible goals. Who might fear violence and feel a responsibility to prevent it? Who might advocate violence?
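The gap between surface pattern-matching and genuine understanding can be made concrete with a small sketch (my own illustration in Python, not drawn from any published benchmark code). A resolver that ignores what the verb implies about councils and demonstrators must give the same answer to both sentences–and so, like chance, gets only one of them right:

```python
# A naive, purely surface-level pronoun resolver: no world knowledge,
# just "pick the candidate noun phrase mentioned closest to the pronoun."
def surface_resolver(sentence):
    candidates = ["city council", "demonstrators"]
    # max by position in the string: the last-mentioned candidate wins
    return max(candidates, key=sentence.index)

human_answers = {"feared": "city council", "advocated": "demonstrators"}

for verb, correct in human_answers.items():
    s = (f"The city council refused the demonstrators a permit "
         f"because they {verb} violence.")
    guess = surface_resolver(s)
    print(f"{verb}: heuristic says {guess!r}; humans say {correct!r}")
```

Swapping “feared” for “advocated” flips the correct answer while leaving every surface cue unchanged–which is exactly what makes these schemas such a clean probe of background knowledge.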

Of course, even the occasional human has been known to have deficits in background knowledge, and those deficits have increasingly been implicated in problems with reading comprehension. In general, if you lack the knowledge that a text takes for granted, you’ll struggle to make sense of anything the text asserts that’s based on that knowledge.

For example, if you don’t know about the functions of antimicrobial agents, or what bacterial resistance and immunosuppression entail, you’ll struggle to make sense of this paragraph (even if you happen to know words like “pathogenic” and “concomitant”):

Continue reading

What Grammarly overlooks

In my last post, I discussed how Grammarly’s feedback mostly targets individual words and phrases–with the exception of its knee-jerk rejection of passive voice. This means that Grammarly mostly overlooks sentence-level revision tools–i.e., the sorts of tools that Catherine and I address in Europe in the Modern World.

One of these tools is passive voice. Compare:

Despite these harsh conditions, the settlement with the Nazis relieved most French men and women.


Despite these harsh conditions, most French men and women were relieved by the settlement with the Nazis.

In the second sentence “the settlement with the Nazis”, moved by passive voice to sentence-final position, receives more emphasis than it did in the original.

Choosing appropriate end-focus involves considerations of sentence meaning and communicative intent that are beyond today’s AI. The same goes for another key element of good writing: sentence cohesion.

Continue reading

Can AI help us with our writing?

I spent my last three posts discussing the failure of artificial intelligence, even state of the art AI, to understand what we say to it and what it says to us. But understanding, surely, isn’t everything. Real-world communications aside, how does AI fare with one of Catherine’s and my other core interests: linguistic tools for clear, coherent sentences?

Two elements of clear, coherent sentences are word choice and punctuation, and some of the more common word choice and punctuation errors have long been handled by basic word processors. In the last decade, however, more sophisticated tools have emerged: tools like ProWritingAid, Ginger, WhiteSmoke, and Grammarly. The last of these, with over 10 million users, is hard to miss unless you completely avoid YouTube. If you survey Grammarly’s Internet reviews, you get the sense that it is the most sophisticated—and most expensive—of the new tools. It’s also the easiest one to get information about without buying a subscription. So Grammarly is my focus today.

Continue reading

Does AI know what “it” means? A look at Winograd schemas via Melanie Mitchell

I left off my last post with a claim that little words like “it” elude AI systems–even those that can answer questions and engage in conversations.

To flesh this out, I turn to a fantastic new book I just finished by Melanie Mitchell–Artificial Intelligence: A Guide for Thinking Humans. (Her book first crossed my radar back before my Twitter suspension, when “autism gadfly” Jonathan Mitchell–her brother and my Twitter friend–mentioned it in a tweet.)

In a chapter on the limits of natural language processing, Mitchell discusses miniature language-understanding tests called “Winograd schemas”, named for NLP researcher Terry Winograd. Here is the first of her examples:

Continue reading

Artificial Intelligence: does state of the art natural language processing really understand language?

As we saw in my last post on GPT-3, state of the art AI can write poetry, computer code, and passages that ruminate about creativity. It can also, with priming from a human being, emulate a particular style:

Before asking GPT-3 to generate new text, you can focus it on particular patterns it may have learned during its training, priming the system for certain tasks… If you prime it with dialogue, for instance, it will start chatting with you.

And if you ask it to write in the style of, say, Scott Barry Kaufman, it will do that.
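Mechanically, this kind of priming amounts to nothing more exotic than prepending examples to your request. Here’s a minimal sketch (my own, in Python; the function name and Q/A format are illustrative inventions, not GPT-3’s actual API) of how a few-shot prompt gets assembled before being handed to a model:

```python
# A sketch of priming as plain prompt construction. Nothing here calls
# GPT-3 itself; the point is that "priming" is just prepended text.
def build_primed_prompt(examples, new_input):
    """Prepend pattern examples to the new input, yielding one prompt string."""
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {new_input}\nA:")
    return "\n\n".join(lines)

dialogue_examples = [
    ("Hello, who are you?", "Happy to chat. Ask me anything."),
]
print(build_primed_prompt(dialogue_examples, "What can you write?"))
```

Because the model simply continues whatever text it is given, a prompt that already looks like a dialogue nudges it to keep producing dialogue.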

But, as the New York Times notes, there are limits:

Continue reading

Artificial Intelligence–what is the state of the art?

In my previous posts on language processing via Artificial Intelligence, I discussed the inability of Twitter AI to distinguish joke-threats embedded within tweets from actual threats tweeted as such; the inability of apps like Siri and Alexa to respond appropriately to questions that haven’t been hard-coded into their systems; and the inability of email AI to distinguish emails from long-time correspondents sent to multiple people from actual spam.

But surely state of the art Artificial Intelligence is more accomplished?

Continue reading

Peer review at Scientific Reports, VIII

Reviewer II concludes (yes, there is, in fact, a conclusion to their critique) with the second half of my introduction:

This study, however, is based on faulty assumptions that undermine both its rationale and its conclusions: assumptions about test performance; about prerequisites for certain linguistic skills; about a disconnect between speech-based vs. typing-based communication skills; about what it would take for someone to point to letters based on cues from the person holding up the board; and about the study’s central premise: the agency of eye gaze. 

Reviewer II views this not as a thesis statement to be supported in subsequent paragraphs, but as an accusation to be dissected as is:

Continue reading

Peer review at Scientific Reports, VII

Reviewer II next turns to the concept of message passing. Here’s what I wrote:

The most direct way to validate authorship is through “message passing” tests involving prompts that the facilitator has not seen ahead of time. For contact-based facilitation, studies dating back to the 1990s indicate that facilitators do in fact guide typing.2,3,4 Results show that the overwhelming majority of typed responses—when they occur—are based on prompts that the facilitator witnessed, and not on prompts that only the typist witnessed, strongly suggesting that the facilitator is the actual author. As for assistance via held-up letterboards, practitioners have yet to participate in rigorous, published, message passing experiments.5

2. Moore, S., Donovan, B., & Hudson, A. Facilitator-suggested conversational evaluation of facilitated communication. Journal of Autism and Developmental Disorders, 23, 541–552 (1993).
3. Wheeler, D. L., Jacobson, J. W., Paglieri, R. A., & Schwartz, A. A. An experimental assessment of facilitated communication. Mental Retardation, 31, 49–59 (1993).
4. Saloviita, T., Leppänen, M., & Ojalammi, U. Authorship in facilitated communication: An analysis of 11 cases. Augmentative and Alternative Communication, 30, 213–225 (2014).
5. Tostanoski, A., Lang, R., Raulston, T., Carnett, A., & Davis, T. Voices from the past: Comparing the rapid prompting method and facilitated communication. Developmental Neurorehabilitation, 17, 219–223 (2014).
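For readers unfamiliar with the protocol, the scoring logic of a message passing test can be sketched in a few lines of Python. Everything here–the function, the trial records–is my own illustration with invented data, not code from any of the cited studies:

```python
# Hypothetical sketch of message-passing scoring; all trial data is invented.
def score_trials(trials):
    """Tally whether each typed response tracks the typist's prompt,
    the facilitator's prompt, or neither."""
    tally = {"matches_typist": 0, "matches_facilitator": 0, "neither": 0}
    for t in trials:
        if t["typed"] == t["typist_saw"]:
            tally["matches_typist"] += 1
        elif t["typed"] == t["facilitator_saw"]:
            tally["matches_facilitator"] += 1
        else:
            tally["neither"] += 1
    return tally

# Invented trials in which responses mostly track what the facilitator saw:
example = [
    {"typist_saw": "dog", "facilitator_saw": "boat", "typed": "boat"},
    {"typist_saw": "cup", "facilitator_saw": "tree", "typed": "tree"},
    {"typist_saw": "sun", "facilitator_saw": "key", "typed": "sun"},
]
print(score_trials(example))
```

A pattern in which responses track the facilitator’s prompts rather than the typist’s is precisely what the 1990s studies cited above reported.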

But for Reviewer II, message passing (MP) means something completely different. Rather than being a means (indeed the best means) to test authorship, it’s a means for interpreting the autistic person’s code.

Continue reading

Peer review at Scientific Reports, VI

Having cleared up any confusion about non-stationary letterboards, Reviewer II next moves backwards to this part of my critique of Jaswal et al’s paper:

The authors’ third claim is that the “unfamiliar experimental setting”, combined with “elevated levels of anxiety common in autism”, may help explain difficulties with message passing tests. The anxiety defense is belied by (1) the care taken in many of the tests to make subjects as comfortable as possible12 (in particular, there is no mounting of eye-trackers to subjects’ heads), and (2) the unlikelihood that anxiety would lead—let alone enable—subjects to type out something that the facilitator saw and the typist didn’t. As for “unfamiliar test setting” issues, the authors cite Cardinal et al’s (1996) study in which message passing performance improved over multiple sessions.13 Error rates, however, remained high, and the study has been criticized for several design flaws.14 

13. Cardinal, D. N., Hanson, D. & Wakeham, J. Investigation of authorship in facilitated communication. Ment. Retard. 34, 231–242 (1996).
14. Mostert, Mark P. Facilitated Communication Since 1995: A Review of Published Studies. Journal of Autism and Developmental Disorders, Vol. 31, No. 3, 287-313 (2001).

Rather than explain why anxiety would enable subjects to type out something that the facilitator saw but that they themselves didn’t (e.g., by discussing ways in which anxiety might enhance autistic telepathy), Reviewer II invokes the Helsinki Accords:

Continue reading

Peer review at Scientific Reports, V

Having established the depth of their credentials in Brain Machine Interface technologies and pointed out that swallowing food and pointing to letters involve different skills, Reviewer II next (actually, it’s hard to keep track of what’s next, as they’ve gone through my FC critique backwards) turns to my argument that:

Subtle board movements are evident in the study’s supplemental videos, and this highlights a final problem. Even if you accept the authors’ justifications for eschewing message passing tests, and even if you somehow rule out the possibility of board-holding assistants providing cues, a non-stationary letterboard calls into question the study’s central premise: the purported agency of the subjects’ eye gaze. Were subjects intentionally looking at letters, or were letters shifting into their lines of sight? That is something that no eye-tracking equipment, no matter how sophisticated, can answer.

The first part of their response (I’ll go through it in order, rather than backwards) is this:

Continue reading

Peer review at Scientific Reports, Part IV

Moving backwards from the final paragraph of my critique of Jaswal et al’s paper on Facilitated Communication, Reviewer II turns to my statement that:

Jaswal et al, moreover, fail to explain why the subjects were able to deliver the often lengthy responses to the open-ended questions reported in this study when pointing to letters on a letterboard via hunt-and-peck-style typing, but not when articulating words through speech (a much less time-consuming process).

Reviewer II responds by explaining that training people to articulate speech sounds involves techniques that differ from training them to point at letters. This, they suggest, is news to linguists like myself–as well as to the behaviorists who work with autistic children:

Continue reading

Peer review at Scientific Reports, Part III

As we saw in Parts I and II, the first reviewer of my critique of a paper on Facilitated Communication dismissed it for two reasons. I was, apparently, harping in a petty way on message passing tests (simple tests in which you ask the person being facilitated something that the facilitator doesn’t know the answer to). And I was, apparently, suggesting that there are no language disorders in which people can type things they can’t say.

Reviewer II came up with completely different reasons for dismissing my critique (which they call my “letter”), perhaps because they chose to read it… backwards. “I will start,” they wrote, “from the end of the letter and will work my way up to the top.”

Indeed, their starting point wasn’t even the last sentence of my review, but the final half of that last sentence: “that lets subjects type with their eyes rather than their fingers—as hundreds of children around the world are successfully doing every day.”

My full final sentence was this:

Continue reading