As we saw in my last post on GPT-3, state-of-the-art AI can write poetry, computer code, and passages that ruminate on creativity. It can also, with priming from a human being, emulate a particular style:
Before asking GPT-3 to generate new text, you can focus it on particular patterns it may have learned during its training, priming the system for certain tasks… If you prime it with dialogue, for instance, it will start chatting with you.
(https://www.nytimes.com/2020/11/24/science/artificial-intelligence-ai-gpt3.html)
And if you prime it to write in the style of, say, Scott Barry Kaufman, it will do that.
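To make the Times's notion of "priming" concrete, here is a minimal sketch of what dialogue priming looks like through OpenAI's completion API. This assumes the original (pre-2023) openai Python client and access to the davinci engine; the prompt text and the API-key placeholder are my own illustration, not anything from the article.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# "Priming" is just prepending examples of the pattern you want continued.
# Here the pattern is a back-and-forth dialogue, so GPT-3 keeps chatting.
prompt = (
    "The following is a conversation between a human and a friendly AI.\n\n"
    "Human: Hello, who are you?\n"
    "AI: I'm an AI that enjoys talking about books and science.\n"
    "Human: What should I read next?\n"
    "AI:"
)

response = openai.Completion.create(
    engine="davinci",      # the original GPT-3 model
    prompt=prompt,
    max_tokens=60,
    temperature=0.7,
    stop=["Human:"],       # stop before the model writes the next human turn
)

print(response.choices[0].text.strip())
```

Priming GPT-3 to imitate a particular writer works the same way: you simply replace the dialogue examples with a few paragraphs of that writer's prose and let the model continue.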
But, as the New York Times notes, there are limits:
Here, as promised, is the full text of my critique:
Reviewer II concludes (yes, there is, in fact, a conclusion to their critique) with the second half of my introduction:
This study, however, is based on faulty assumptions that undermine both its rationale and its conclusions: assumptions about test performance; about prerequisites for certain linguistic skills; about a disconnect between speech-based vs. typing-based communication skills; about what it would take for someone to point to letters based on cues from the person holding up the board; and about the study’s central premise: the agency of eye gaze.
Reviewer II views this not as a thesis statement to be supported in subsequent paragraphs, but as an accusation to be dissected as is:
Reviewer II next turns to the concept of message passing. Here’s what I wrote:
The most direct way to validate authorship is through “message passing” tests involving prompts that the facilitator has not seen ahead of time. For contact-based facilitation, studies dating back to the 1990s indicate that facilitators do in fact guide typing.2,3,4 Results show that the overwhelming majority of typed responses—when they occur—are based on prompts that the facilitator witnessed, and not on prompts that only the typist witnessed, strongly suggesting that the facilitator is the actual author. As for assistance via held-up letterboards, practitioners have yet to participate in rigorous, published, message passing experiments.5
2. Moore, S., Donovan, B., & Hudson, A. Facilitator-suggested conversational evaluation of facilitated communication. Journal of Autism and Developmental Disorders, 23, 541–552 (1993).
3. Wheeler, D. L., Jacobson, J. W., Paglieri, R. A., & Schwartz, A. A. An experimental assessment of facilitated communication. Mental Retardation, 31, 49–59 (1993).
4. Saloviita, T., Leppänen, M., & Ojalammi, U. Authorship in facilitated communication: An analysis of 11 cases. Augmentative and Alternative Communication, 30, 213–225 (2014).
5. Tostanoski, A., Lang, R., Raulston, T., Carnett, A., & Davis, T. Voices from the past: Comparing the rapid prompting method and facilitated communication. Developmental Neurorehabilitation, 17, 219–223 (2014).
But for Reviewer II, message passing (MP) means something completely different. Rather than being a means (indeed the best means) to test authorship, it’s a means for interpreting the autistic person’s code.
Having cleared up any confusion about non-stationary letterboards, Reviewer II next moves backwards to this part of my critique of Jaswal et al’s paper:
The authors’ third claim is that the “unfamiliar experimental setting”, combined with “elevated levels of anxiety common in autism”, may help explain difficulties with message passing tests. The anxiety defense is belied by (1) the care taken in many of the tests to make subjects as comfortable as possible12 (in particular, there is no mounting of eye-trackers to subjects’ heads), and (2) the unlikelihood that anxiety would lead—let alone enable—subjects to type out something that the facilitator saw and the typist didn’t. As for “unfamiliar test setting” issues, the authors cite Cardinal et al’s (1996) study in which message passing performance improved over multiple sessions.13 Error rates, however, remained high, and the study has been criticized for several design flaws.14
13. Cardinal, D. N., Hanson, D. & Wakeham, J. Investigation of authorship in facilitated communication. Mental Retardation, 34, 231–242 (1996).
14. Mostert, M. P. Facilitated communication since 1995: A review of published studies. Journal of Autism and Developmental Disorders, 31, 287–313 (2001).
Rather than explain why anxiety would enable subjects to type out something that the facilitator saw but that they themselves didn’t (e.g., by discussing ways in which anxiety might enhance autistic telepathy), Reviewer II invokes the Helsinki Accords:
Having established the depth of their credentials in Brain Machine Interface technologies and pointed out that swallowing food and pointing to letters involve different skills, Reviewer II next (actually, it’s hard to keep track of what’s next, as they’ve gone through my FC critique backwards) turns to my argument that:
Subtle board movements are evident in the study’s supplemental videos, and this highlights a final problem. Even if you accept the authors’ justifications for eschewing message passing tests, and even if you somehow rule out the possibility of board-holding assistants providing cues, a non-stationary letterboard calls into question the study’s central premise: the purported agency of the subjects’ eye gaze. Were subjects intentionally looking at letters, or were letters shifting into their lines of sight? That is something that no eye-tracking equipment, no matter how sophisticated, can answer.
The first part of their response (I’ll go through it in order, rather than backwards) is this:
Moving backwards from the final paragraph of my critique of Jaswal et al’s paper on Facilitated Communication, Reviewer II turns to my statement that:
Jaswal et al, moreover, fail to explain why the subjects were able to deliver the often lengthy responses to the open-ended questions reported in this study when pointing to letters on a letterboard via hunt-and-peck-style typing, but not when articulating words through speech (a much less time-consuming process).
Reviewer II responds by explaining that training people to articulate speech sounds involves techniques that differ from training them to point at letters. This, they suggest, is news to linguists like me, as well as to the behaviorists who work with autistic children:
As we saw in Parts I and II, the first reviewer of my critique of a paper on Facilitated Communication dismissed it for two reasons. I was, apparently, harping in a petty way on message passing tests (simple tests in which you ask the person being facilitated something that the facilitator doesn’t know the answer to). And I was, apparently, suggesting that there are no language disorders in which people can type things they can’t say.
Reviewer II came up with completely different reasons for dismissing my critique (which they call my “letter”), perhaps because they chose to read it… backwards. “I will start,” they wrote, “from the end of the letter and will work my way up to the top.”
Indeed, their starting point wasn’t even the last sentence of my review, but the final half of that last sentence: “that lets subjects type with their eyes rather than their fingers—as hundreds of children around the world are successfully doing every day.”
My full final sentence was this:
The worst COVID reporting I’ve seen to date: 2 Companies Say their Vaccines Are Effective. What Does That Mean?
Here’s the deck:
You might assume that 95 out of every 100 people vaccinated will be protected from Covid-19. But that’s not how the math works.
In fact, that’s exactly how the math works, as you’ll see if you read the story.
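For readers who want to check the arithmetic themselves, here is the standard efficacy calculation, sketched with the widely reported split from Pfizer’s trial (roughly 8 COVID-19 cases in the vaccinated arm versus 162 in a similarly sized placebo arm; treat the exact figures as illustrative):

\[
\text{efficacy} \;=\; 1 - \frac{\text{attack rate (vaccine arm)}}{\text{attack rate (placebo arm)}} \;\approx\; 1 - \frac{8}{162} \;\approx\; 0.95
\]

In other words, a vaccinated person’s risk of falling ill was roughly 95 percent lower than a comparable unvaccinated person’s, which is the ordinary sense in which 95 out of every 100 vaccinated people who would otherwise have gotten sick are protected.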
As we saw in Part I, one of the problems that Reviewer 1 had with my critique of a paper on Facilitated Communication was my skepticism about the existence of language disorders “that combine extant oral skills with pragmatics skills that only emerge during hunt-and-peck typing.” The reviewer said that I was implying that there are no known disorders in which people can type things they can’t speak. (So much for my deaf friends and relatives, with whom–way back when–I used to correspond via TTY).
Reviewer 1 had just one other issue with my critique:
Much of the response appears fixated on message-passing experiments, which is a useful design, but cannot be seen as the only viable means of deriving useful information related to the question of autistic communicative agency. Jawal et al. (2020) point to a variety of reasons the results of prior message passing studies may not fully reflect the agency of all autistic individuals using spelling-based forms of communication. There are certainly points at which I feel Jaswal et al. could have done a better job of connecting the dots, but that alone hardly seems worth pointing out in an alternative publication.
Aside from the fact that message passing (e.g., asking the child a question that the facilitator doesn’t know the answer to) is the simplest, most direct way to test authorship (something that anyone who wants to can easily do at any time without any expensive machinery), “connecting the dots”, as I wrote, is only one of several problems with Jaswal’s article:
I recently received two reviewer responses to my critique of a recent paper by Vikram Jaswal that purports to find support for a form of Facilitated Communication known as Spelling to Communicate:
Reviewer 1 takes this paragraph of mine:
Jaswal et al, moreover, fail to explain why the subjects were able to deliver the often lengthy responses to the open-ended questions reported in this study when pointing to letters on a letterboard via hunt-and-peck-style typing, but not when articulating words through speech (a much less time-consuming process). The authors state that “all but one participant was reported to be able to speak using short phrases or sentences” but “none could respond verbally to open-ended questions of the type they were asked in this study.” There is, however, no officially recognized language disorder that combines extant oral skills with pragmatic skills that only emerge during hunt and peck typing.
And interprets it this way:
I’ve come full circle.
Last spring I was thinking I might never return to the classroom because SARS-CoV-2 would always be around.
Eight months later here I am, asking permission to finish the term in person. My college having announced that, after Thanksgiving, classes will be remote, I find myself in the unexpected position of requesting reverse accommodations.
Every day, the junk mail in my inbox outnumbers the junk mail that goes straight to junk. But I figure that’s the price I (we?) pay for email filters that err on the side of caution. Worse than getting all that email we don’t want to see is missing the email we care about.
One particular sort of email I care about these days consists of messages that give health updates for a relative of mine who is suffering from an idiopathic cancer (the kind that something like three people in the entire country have). She’s been on an experimental drug, and we’ve all been on tenterhooks wondering whether it will shrink the metastatic tumors that showed up in her lungs last February.
As I mentioned earlier, Twitter, having no legal or financial incentive to involve actual humans, appears to leave its decisions about account suspensions entirely up to its AI system. As it turns out, however, there is one way in which Twitter’s AI does invite human eyes into the process.
Here’s how it works: