No!
It can’t!
This probably shouldn’t have come as a surprise, but I’m cutting myself some slack because I have no idea what the AI is doing if it’s not reading. Given its ability to produce prose that’s both cohesive and clichéd, I assumed, or more accurately felt, that it was doing something I call “reading” in order to achieve these effects.
That’s what we humans do to write prose that’s cohesive and clichéd. We read.
Katie tells me I’m late to the party: AI people have been writing about the barrier of meaning for years. I need to catch up.
Katie also wonders whether the AI can parse negatives. I wonder, too, though difficulty parsing negatives probably doesn’t explain why the AI got all three facets of Mike Rose’s experience completely wrong.
That said, maybe a story that hinges on mistaken identity (Rose’s IQ scores were mixed up with those of another student) is a kind of negative?
Parsing capability aside, the striking inaccuracy of the AI’s “reading” of “I Just Wanna Be Average” seems to come from its bias toward progressive education. Ergo: standardized testing bad.
But of course this explanation makes me wonder how the AI’s creators managed to build a non-comprehending entity that can so perfectly capture a fundamental tenet of progressive education.
How do you give the AI a bias?
Do you program bias in directly, or do you train it on carefully selected (and properly biased) materials? If the latter, why would it be “worth it” to the AI’s creators to spend time deciding which edu-texts a properly progressive AI should be trained on? How many people care about, or even know about, the education wars outside education?
How conscious and intentional did the AI’s creators have to be to produce the egregiously wrong paper my student handed in?
The whole thing is a mystery.