The Brave New World of Twitter justice, Part II

As I learned in the aftermath of my suspension from Twitter (see below), Twitter leaves its decisions about account suspensions entirely up to its AI system. No human eyes appear to be involved at any stage of the process. If Twitter AI finds a phrase anywhere in a tweet that it associates with a violent threat, it will, automatically and permanently, suspend the “offending” account.

You might think that permanently suspending an account for an infraction that didn’t actually occur might itself be a violation: a violation, that is, of the Terms and Conditions for Twitter use. And surely, you might think, there’s legal recourse.

But Twitter has that covered:

Continue reading

Artificial Intelligence–does it really understand what we’ve said?

Today I had my monthly Zoom meeting with my linguistics colleagues, and we found ourselves talking about artificial intelligence. One of my colleagues, Debbie Dahl, recently wrote, along with Christy Doran, an editorial for Speech Technology Magazine entitled “Does your Intelligent Assistant Really Understand you?”

To address this, they put together a list of queries of the sort one might ask of an intelligent assistant like, say, Siri or Alexa.

Here are some examples of what happened.

Continue reading

We’re now live at ResearchEd2020!

This year’s conference is online, and our talk, Using intuitive “information-integration” learning to teach language to autistic and L2 students online has just gone live.

Catherine teaches us about the two learning systems (implicit vs. rule-based learning), with a fascinating discussion of the Iowa Gambling Task. Turning to my SentenceWeaver program, we then talk about how the two learning systems relate to the syntax and pragmatics of questions and pronouns. We also show some videos of the program in action, including the brand-new animated Pronouns Module and Catherine’s non-verbal son acing his way through the Questions Module.

The video should be available indefinitely.

The Philadelphia Inquirer no longer publishes letters online

…so I’ll have to do that myself:


And, while I’m at it, I might as well re-post it in a searchable format with live links:

To the Editor,
Your recent Op-Ed, Americans with Disabilities Act should cover autism, too, was accompanied by a highly misleading photograph. It shows a boy pointing to letters on a board that is held up by another person. The same photograph has appeared elsewhere as an illustration of Rapid Prompting, also known as Spelling to Communicate. Rapid Prompting is a form of what’s known as “facilitated communication.” Based on a redefinition of autism as a sensory-motor disorder, Rapid Prompting lacks a solid evidence base. Particularly worrying are questions about whether those undergoing Rapid Prompting are actually doing the communicating. A large body of research indicates that the facilitators subconsciously direct the typing—for example, by shifting the board and deciding when a letter has been selected. In using this photograph as your illustration of autism, you unintentionally communicate that the methodology it depicts is generally accepted and supported.


I made a *HUGE MISTAKE* the other week!

I had gotten the impression that a fatally flawed eye-tracking study I blogged about below–the one written by Dan Willingham’s colleague at UVA that supposedly showed support for a form of facilitated communication in autism–was published in the prestigious journal Nature.

It turns out this article was instead published in a completely different publication under the Nature umbrella: a journal called “Scientific Reports.”

I found this out when I tried to submit a letter to Nature. Nature wrote back saying that they don’t accept letters about articles in publications other than Nature. It took me a while to figure out that there are three separate entities all involving the name Nature.

There’s the journal Nature; there’s Nature the publisher of dozens of distinct journals; and there’s Scientific Reports.

And, as it turns out, there are lots of differences between Nature and Scientific Reports. Scientific Reports accepts 56% of submissions; Nature accepts 8%. Scientific Reports charges authors thousands of dollars to publish ($5380 for US authors), and allows them input on who should and shouldn’t review their work.

Scientific Reports has a history of retractions, including, so far in 2020, of a paper claiming the sun causes global warming, one claiming that cell-phone-induced neck-bending causes people to grow horns, and one that was plagiarized from the BA thesis of a Hungarian mathematician.

I’m not sure how much Scientific Reports charges to publish letters to the editor. So, as far as the letter I mistakenly wrote to Nature (as in the journal Nature) goes, I’ll do what I did with the one I wrote to the Chicago Tribune about its pro-Rapid Prompting Method piece from early January, which also went unpublished, and post it publicly.

To the Editors,

I’m writing with concerns about your article “Eye-tracking reveals agency in assisted autistic communication” (Jaswal, V.K., Wayne, A. & Golino, H. Sci Rep 10, 7882 (2020)).

One concern relates to the authors’ justifications for testing the agency of what they call “assisted communication” indirectly, via eye movements, rather than directly, via a message-passing test—the gold standard for establishing authorship.

In a message-passing test, the researcher prompts the subject and/or asks him a question while the assistant, or facilitator, is out of the room. The facilitator then returns and facilitates the subject’s response. If the response is appropriate, message-passing has succeeded.

The authors suggest that, in the case of non-speaking children, message-passing may fail for reasons that have nothing to do with agency. Their claim:

“Children who can talk receive years of prompting and feedback from adults on how to report information their interlocutor does not know, the essence of a message passing test.”

There are several problems with this statement. First, message-passing involves information that is unknown to the facilitator, not to the interlocutor (the researcher). Second, the statement suggests that typical three-year-olds don’t yet know how to talk with people about things that happened while they were out of the room. Third, as the study itself reports, while no participant “was reported to be able to have a ‘to and fro’ spoken conversation involving turn-taking or building on what a conversational partner had said earlier,” all but one “was reported to be able to speak using short phrases or sentences.” (What would cause these individuals to become more conversational when typing on a letter board with an index finger is left unexplored).

My other concern is the letter board, which could have been placed on a stationary stand, but instead is held up by the assistant. As we see in the article’s videos, the board shifts around significantly during typing.

A shifting board makes it hard to draw reliable conclusions about intentional eye fixations or about intentional letter selection—regardless of how high tech the head-mounted eye tracker and video processing software are.

Of course, as far as the authors go, if what you’re after is publicity for, say, an undisclosed for-profit operation, all that matters is that you get the word out to lots of potential customers before you get scrutinized by actual peer review and ultimately retracted. It could be that spending over $5,000 to promote the Rapid Prompting Method is a very worthwhile investment.

Especially given how much money desperate parents are willing to pay for anything that appears to boost the communicative potential of their autistic children to the degree promised by RPM.

Facilitating communication with Dan Willingham

I’ve long respected Dan Willingham’s work, but I’m concerned about a study he tweeted favorably about this week.


I and others tweeted some criticisms, and Dan posted a rebuttal on his blog. In this post, which I’ll also share on Twitter, I’m going to respond, point by point, to that rebuttal.

First, a few notes about my respect for Dan and his work. When his first book came out, I gave it a 5-star review on Amazon; I linked to an article of his on phonics here just 3 months ago; and I’ve had cordial, even friendly, exchanges with him over the years.

However, the study Dan cited has raised red flags for me. Its object of investigation—Rapid Prompting Method—is similar to a discredited method called “Facilitated Communication” (FC). Both have been used with minimally verbal children with autism, and both are things I’ve blogged about, most recently here.

RPM has never undergone the rigorous testing that FC has, and a number of us, long familiar with the litany of non-rigorous tests, anecdotes, and unsubstantiated claims that purportedly support RPM, took a look at who was involved in this study and immediately had concerns.

One concern: the conflicts of interest of the first author, Vikram Jaswal. First, there’s his long-time collaboration with the person who runs the clinic, whose teaching method (a form of RPM) the study investigated and drew favorable conclusions about. Second, there’s the fact that this method is used with Jaswal’s autistic daughter. From an article in the Washington Post:


Another concern: Jaswal’s collaborator, whom various indicators suggest was also the assistant overseeing the experiment. She–I’ll avoid using her name–has a history of claiming credentials she did not have, including in her dealings with paying clients: dealings relevant to the questions addressed by the experiment.

Unimpressed with our remarks, Dan ultimately tweeted:


Soon a number of threads spun out containing discussions of the study’s methodology and data, including this one from @24shaz. @angelrabanas analyzed one video in detail and posted her analysis on YouTube.

So I pinged Dan, and a day or two later he notified us that he’d put up a post on his blog in response, entitled “Responding to a Study You Just KNOW Is Wrong.”

Dan begins his post with a discussion of the learning styles controversy, and how educators who think they are “in the know” will pounce “with glee” on anyone who mentions learning styles. He goes on to say:


This made me wonder whether any of us had said anything bullying or snobbish about Jaswal et al or their studies. So I went through the various threads, and the only things I found that I thought might possibly be interpreted as snobbish or bullying were these:


Anyway, Dan goes on to explain how if you uncritically dismiss disproven theories you might miss new developments. As an example of a new development that he has not missed, Dan cites new research on learning styles theory and notes that his views have changed as a result.

I’m familiar with Dan’s skepticism about learning styles, and so I was intrigued to hear he’d changed his mind. I followed the link he gave to his new views, read through to the conclusion, and this is what I found:


But anyway, back to Dan’s blog post. Dan next turns to Jaswal et al’s paper:


Missing from this discussion is the most likely way that cuing could occur: not through a distinct cue for each of the 26 letters of the alphabet, but through the facilitator’s movement of the letter board. Such movement is a regular occurrence in videos of RPM, including those included in this study.

In particular, the facilitator may (unwittingly) shift the letter board such that a particular letter approaches the subject’s finger and/or enters the path of his/her eye gaze. To pick up on such cues, the subject does not need to look up at the assistant.

A moving letter board also makes it hard to draw reliable conclusions about intentional eye fixations or about typing rhythm.

Anyway, moving on, Dan turns to some of our concerns. Noting (correctly) that I mischaracterized Jaswal’s relationship to something called “The Tribes” (thus mischaracterizing the precise nature of Jaswal’s collaboration with the clinic’s director), he turns to a tweet by Jason Travers, observing that:


Dan leaves it at that. But Jason is making an important point. Yes, he’s asking for a different experiment, but not one that differs in the ultimate questions being explored (i.e., the authorship of the messages). Rather, Jason is asking for an experiment that addresses these same questions using a more rigorous, well-established methodology: one where the possibility of facilitator influence is eliminated, as it is in a message passing test.

Jaswal et al attempt to explain why they think message-passing tests are problematic, making some claims that anyone with any background in language acquisition and linguistic pragmatics would instantly recognize as absurd:


Years of prompting and feedback are needed before children are able to convey new information? Tell that to the parent of a young toddler! Then there’s the fact that all of the participants, although they’re called “nonspeaking,” can talk—just not fluently and interactively.


What would cause these kids to be more fluent and interactive when typing on a letter board with an index finger? That is a question that Jaswal et al leave unexplained.

Even if Jaswal et al had good reasons for rejecting message passing, they’d still need to explain why they didn’t take other influence-reducing steps–like blindfolding the assistant and/or placing the alphabet board on a stationary stand. Such options are never even mentioned.

Dan does not discuss these huge design flaws. Nor does he address our concerns about how a moving board (1) provides cues that don’t involve subjects looking up at the assistant and (2) seriously warps the eye gaze results. Nor does he address various concerns we had about how the data was coded.

Instead, having dispensed with Jason, Dan then reacts to two tweets that, he grants, do address the data and its methods.


I’ve reviewed these and other tweets of @24shaz and @angelrabanas re the videos, and as far as I can tell, Dan’s reaction here is a non sequitur that does not rebut their critiques. Doing that convincingly would require quite a bit more discussion.

Dan then turns to a tweet I posted:


So I went back to the methods section to see if I had missed something:


There’s nothing here, so far as I can tell, that’s relevant to the issue of oral cues being delivered ahead of letter selection. Nor does this issue come up anywhere else in the methods section.

Having dispensed with the above tweets, Dan turns to one regarding the study’s small sample size and explains how small samples are routine in neuropsychology. He does not address concerns about how the subjects were non-randomly selected–they were hand-picked by the clinic–or about the grounds for eliminating one of the subjects.

Dan then dispenses with the rest of our tweets, among them many more that focused on methodology and data (and a mysteriously missing video) with this:


Alluding to his opening remarks, Dan then contrasts his take on our tweets with his take on his openness to revising his views on learning styles.


He goes on to say that people would be best advised to just ignore studies that they aren’t going to bother to read and that are outside their areas of expertise:


Regarding the second point, on expertise, what’s most relevant here, as far as our critiques go, is the expertise involved in evaluating RPM and FC videos for signs of manipulation. Some of us have spent many hours doing this—perhaps more hours than this paper’s authors, reviewers and advisors have.

Regarding the first point, about bothering to read things, the same applies to Dan himself. If he has the feeling that a conclusion (say one regarding Jaswal’s paper) is probably wrong, but doesn’t want to take the time to properly engage with the arguments, he might be better off ignoring it.

Incidentally, if Dan has a feeling that Jaswal’s critics are probably wrong, he has left out one possible reason why: he and Jaswal are colleagues in the same department at UVA, and Dan is mentioned in the paper’s acknowledgements:


Dan concludes with this:


If Dan were to inform himself about the harsh and abusive practices in both FC and RPM (evident in some of the videos of kids who are subjected to it), and about the linguistic and educational opportunity costs that they potentially impose, and about the evidence of greed and malice within the FC/RPM industrial complex, he might hesitate to interpret our passionate concern as nothing more than righteous indignation.

Facilitating Un-facilitated Communication in Autism

My two hour talk on this charged topic is going live at 5:00 PM EST today, available any time after that:

There’ll be a live q & a on Monday, and I’m hoping someone will bring up a paper, just published in Nature of all places, that appears to provide empirical support for a particular type of facilitated communication:

I’ve had an… interesting exchange with Dan Willingham on Twitter about the paper:


Is Structured Word Inquiry the answer to America’s reading woes? Part V

At this point, with five recent SWI critiques from Greg Ashman, writing yet one more myself feels like beating a dead horse. In terms of SWI publications, much of Ashman’s criticism has focused on a paper that has come out since I started this series: Bowers’ recent paper “Reconsidering the Evidence That Systematic Phonics Is More Effective Than Alternative Methods of Reading Instruction”. Ashman’s criticisms are quite thorough, and I have nothing to add to them.

Instead, I’ll return to the older Devonshire et al (2013) paper—a paper that discusses efficacy data that supposedly favors SWI over phonics. This paper compares 1st and 2nd graders who spent 6 weeks exposed to 15-25 minutes daily of SWI with those who got “standard classroom” phonics instruction, and finds that the SWI intervention improves word reading scores. While this is reasonable grounds for further investigation, it’s far from the kind of study needed to justify a replacement of systematic phonics with SWI. For one thing, all of the students in the study received phonics instruction in addition to SWI. For another, the comparison involved an instructional time frame of only 6 weeks. Finally, the “standard classroom” phonics being compared to SWI can mean just about anything, including watered-down, unsystematic phonics instruction of the sort that has failed many kids over many years.

To answer the question of whether systematic phonics should be replaced with SWI, Continue reading

Basic grammar vs. “school grammar”

I still need to wrap up my Structured Word Inquiry series (from last November!) with at least one more post, but some of the more recent Twitter chatter on SWI has brought up a broader issue that I thought I’d address first. That would be the question of which aspects of grammar actually need to be taught to students who are native English speakers.

To address this question, it’s useful to draw a distinction between “basic grammar” and “school grammar.”

Basic grammar is the stuff that native speakers, assuming they don’t have language impairments/autism, pick up incidentally without formal instruction. This includes everyday vocabulary, word order, word endings (morphology), and syllabification. Absent language impairments, native speakers do not, for example, need to be taught that “crumb” and “crumbs” and “do” and “does” are related, or that we say “no bananas” rather than “no banana”–contrary to what some SWI proponents have suggested on Twitter:

Continue reading

Is Structured Word Inquiry the answer to America’s reading woes? Part III

What B & B present as SWI’s greatest feature—the excitement of an explicit, inquiry-based approach to word recognition—is, arguably, its greatest liability. The more a child’s conscious attention is directed to the morphological structures and etymologies of individual words, the less room it has to attend to the overall meanings of phrases and sentences. The whole point of reading instruction is for word identification to quickly become automatic, and learning by rote what phonics presents as irregularities is arguably a more efficient pathway than deliberately generating hypotheses and tests for each newly encountered word.


After all, when it comes to reading, word identification is a means to an end, not an end in and of itself. Given this, the parallel B & B draw between acquiring reading skills and acquiring astronomy knowledge is faulty: if I want to learn astronomy, I want to be able to read an astronomy textbook without being bogged down and distracted by morphological word families and etymological histories. Indeed, even if I’m reading a book about morphology and etymology (a better analogy to an astronomy class is a linguistics class!), I still don’t want to get bogged down by a possibly ingrained habit of attending to the morphological and etymological properties of every single word I’m reading in the process.

And even if a phonics-based approach to reading, complete with the rote learning of what phonics considers irregularities, is a lot less fun than SWI, mastery of the process makes reading a lot less effortful a lot more quickly. Reduced effort, in turn, frees the mind for greater engagement with the actual content of texts than what is possible via SWI’s approach to word recognition.

It’s worth noting at this point that children are especially good at the rote learning of irregularities: look no further than language acquisition. The morphological building blocks of language—those roots, prefixes, and suffixes—involve arbitrary mappings between spoken sound and semantic meaning, and children are famously expert in “fast mapping” these correspondences. Compared to the number of arbitrary mappings that children learn in acquiring spoken language, the number of arbitrary mappings that they must learn once they’ve advanced to phonics is minuscule. Recall, again, the commonalities of “to”, “too” and “two” vs the chaos of “togh”, “gar” and “blim.”

B & B’s criticism of implicit approaches to word identification, recall, is that “in a completely arbitrary world, no generalization is possible.” But through the prism of phonics, for all the letter patterns it treats as exceptions, the English writing system is far from chaotic.

Could SWI still be a viable alternative route to reading—offering, for all the downsides of explicit hypothesis generation, a strategy that’s superior to phonics, at least for some students?

When it comes to the viability of SWI, particularly for novice readers encountering unfamiliar printed words, the devil is in the details. Stay tuned for part IV.

Is Structured Word Inquiry the answer to America’s reading woes? Part II

So is SWI the answer to the nation’s reading problems? In particular, is it a better alternative to phonics?

Let’s first return to the biggest purported problem with phonics—namely, its inability to handle what it calls spelling irregularities. Let’s look, in particular, at the difficulty purportedly posed by homophones like “to”, “too”, and “two.” B & B claim that “If the prime purpose of spellings is to encode sounds, we should expect homophonic words to be spelled the same.” (p. 128). And “to”, “too”, and “two” are certainly not spelled the same. But neither are they spelled completely differently. They are not, for example, spelled “togh”, “gar” and “blim”. As even a cursory comparison of “to”, “too”, and “two” makes clear, their spellings have more commonalities than differences—precisely because these spellings are largely (and arguably primarily) based on their pronunciations. Indeed, all sets of homophones overlap significantly in the details of their spellings—some differing only minimally (“heal”, “heel”; “grown”, “groan”).

It’s also worth noting that, while B & B are correct that English has many (indeed hundreds of) homophones, the overwhelming majority of English words aren’t members of homophone families.

The other big problem with phonics, according to SWI, is that it overlooks that spellings encode meaning as well as sound. But how big a problem is this when it comes to actual comprehension? After all, we have no difficulty understanding spoken language. When we hear a word that sounds like “sign” or “sine”, context tells us whether it denotes a street sign or a trigonometric function. Generally, homophones disambiguate through context. True, students routinely have trouble when it comes to spelling common homophones—confusions of “there”, “their”, and “they’re” are as ubiquitous as they are alarming—but this is not an issue for reading. No student is going to misread “they’re” simply because they often misspell it as “their.”

What about all those common monosyllabic words with irregular spellings? Yes, if one follows a strictly letter-to-sound-based route one will theoretically mispronounce them. But are these the sorts of words that are commonly mispronounced by actual children? How many children, even if all they’ve had for reading instruction is SWI-free phonics, persist in mispronouncing “do” as “doe”, “are” as “air”, “though” as “thoug”, “laugh” as “log” or “react” as “reekt”? As B & B note, it’s the high frequency words that tend to be irregular in their phonics (and also, I would add, in their morphology): this makes them especially suitable to implicit learning mechanisms (subconscious learning through high-frequency exposure). Does SWI, with its non-implicit framework, really have a more efficient way of teaching their correct pronunciations? I’ll return to this question later on.

Furthermore, even when such words are mispronounced, the mispronunciations often provide sufficient clues as to their actual pronunciations. A child who reads “Do you want a cookie” as “Doe yow wannt ay kookie?” on the first pass may well be able to self-correct—and, through repeated trials, internalize those corrections to the point where they automatically override the mispronunciations.

Beyond the purported downsides of phonics, what about what some proclaim as SWI’s greatest feature: the excitement of an explicit, inquiry-based approach to word-recognition?


Stay tuned for Part III.

Oral vs. Written Language–Two Talks on Twitter

In the last 24 hours, I’ve participated in two different but intersecting discussions on Twitter—one on phonics, the other on autism. Their point of intersection: the question of oral vs. written language.

First, Phonics

The phonics discussion was one I couldn’t help jumping into. A distinguished education professor and specialist in reading instruction dismissed someone’s linguistically accurate observations about consonant-vowel-consonant (CVC) patterns by telling them they should take a class in linguistics. I’ve taken many classes in linguistics, so I piped in as follows:
Continue reading

After 100+ autism interviews, it’s time to debrief

I’m finally coming up for air after an intensive autism project funded by the National Science Foundation. We had seven weeks to conduct at least 100 interviews–mostly with parents of autistic kids and with autism-focused teachers and therapists. The unrelenting stress of those seven weeks (which also involved weekly homework, lectures, and presentations, two trips to Boston, and a boot-camp ethos throughout) reminded me of the unrelenting stress I felt during the most difficult eras of J’s childhood.

And the difficulty I found in tracking down autism parents made me wonder whether autism is quite the epidemic people say it is.

My specific target was parents of children somewhere in the middle of the autism spectrum: kids who can recognize and produce at least a few spoken and written words, but who continue to struggle at least to some extent in putting those words together into grammatical phrases and sentences.

In the end, I spoke with about 40 parents, just barely enough to meet our weekly quotas and not get yelled at. Actually, the fear of being yelled at—funny how that doesn’t fade away with age!—was ultimately a good thing, as it resulted in some really interesting interviews.

Here are my main takeaways (some of these will be familiar to anyone familiar with autism):
Continue reading