As we saw in Parts I and II, the first reviewer of my critique of a paper on Facilitated Communication dismissed it for two reasons. I was, apparently, harping in a petty way on message passing tests (simple tests in which you ask the person being facilitated something that the facilitator doesn’t know the answer to). And I was, apparently, suggesting that there are no language disorders in which people can type things they can’t say.
Reviewer II came up with completely different reasons for dismissing my critique (which they call my “letter”), perhaps because they chose to read it… backwards. “I will start,” they wrote, “from the end of the letter and will work my way up to the top.”
Indeed, their starting point wasn’t even the last sentence of my review, but the final half of that last sentence: “that lets subjects type with their eyes rather than their fingers—as hundreds of children around the world are successfully doing every day.”
My full final sentence was this:
If the ultimate goal is to test authorship via eye gaze, why not do so directly, by using the kind of eye-tracking software that lets subjects type with their eyes rather than their fingers—as hundreds of children around the world are successfully doing every day.
Instead of providing some reason why this question is objectionable (petty? irrelevant?), Reviewer II uses my software allusion as a springboard for detailing their credentials:
Brain-computer interfaces (BCI) or brain-machine interfaces (BMI) such as those mentioned towards the end of the letter, were my specialty during my postdoctoral years at CALTECH. I developed algorithms and mathematical equations to help accelerate the training of such systems, to help people intentionally direct cursors and moving targets, even when feedback from the peripheral nervous systems signals was corrupted by noise, or absent altogether (due to severed spinal cord, etc.) My algorithms involving the detection of volition have been patented in the US and Europe. As such, I feel confident to speak about this matter. I have also published in peer-reviewed journals on the topic and written books whereby I explain the mathematics behind such seemingly “magic” act.
Impressive though all this sounds, I’m not sure that eye-tracking devices really count as brain-computer interfaces. Eye-trackers emit infrared light that is reflected in the user’s eyes and then detected by the eye-tracker’s cameras. As Wikipedia explains, “The information is then analyzed to extract eye rotation from changes in reflections.” In other words, the eye-tracking doesn’t penetrate beyond the eye into the brain.
But anyway, having established their credentials, Reviewer II goes on to claim that the training of BMIs is analogous to the “learning/adaptation” process that happens between a person who is extending their index finger and the person who is holding up the letter board.
The process of learning to adapt the signals that one harnesses from the nervous systems of the person, e.g. the EEG brainwaves, the saccadic and pursuit eyes’ signals, the bodily kinematics or EMG biorhythms, the cortical or subcortical spikes, etc. and the external object’s motion (e.g. cursor to jump to a letter, the robotic arm to point, the prosthetic hand to grasp, etc.) happen to follow similar steps as those between a human trying to communicate and another human holding a letter board to receive the action and return a consequential response.
Both endpoints (the eyes traces coordinated with the hand traces of the autistic person) and the handheld letter-board, follow biorhythmic motion trajectories that at a microscopic level inevitably learn to entrain. Simultaneous processes take place that allow that end-product to most of the time be probabilistically correct. These processes are intentional and spontaneous in nature. There is a balance that must develop, such that initially the machine (in this case the facilitator) is in fact doing all of the work; followed by a hybrid phase whereby machine (facilitator) and user (eyes-hand traces in this case) take turns in performing the task, and finally 100% the user (the eyes-hand traces in this case) dominate. At that point, the user is performing the task autonomously and independently.
The learning – adaptation process relies of subtle variations of the signals which probabilistically adapt towards the independent performance of the brain signals correctly (most of the time) outputting the desired outcome (the proper letter to form a phrase.)
It does not seem to concern Reviewer II that massive facilitator influence is built into this training process. Or that assistive communication devices like eye trackers don’t, in fact, begin by “doing all of the work.” Or that none of this in any way resembles how people acquire the ability to provide appropriate answers to open-ended questions and to use grapheme-phoneme correspondences to spell them out.
But Reviewer II is optimistic that BMIs are the assistive/educational devices of the future for nearly a third of the autistic population, and credits none other than Vikram Jaswal for helping us get there:
These BMI/BCI systems are not ubiquitous just yet, but they will be some day owing to the type of research that Prof. Jaswal’s lab is doing. Furthermore, they will open a path to education to the 30% segment of the autism spectrum that has no proper alternative way to the existing methods today.
After carrying on a bit about how the nervous systems of ASD individuals “developed through different paths and rely on different mechanisms that we have yet to fully understand,” and lamenting that “these minimally verbal folks are being forced to follow the same type of learning / teaching strategies as neurotypical children do,” Reviewer II grants me some credit for (apparently) bringing up BMIs at the end of my final sentence.
They then move up a couple of sentences to a phrase at the beginning of my final paragraph.
But I’ll save their next set of remarks for next time.