Not surprised to see this:
It makes sense that people would see the AI as the language-arts equivalent of a calculator. I had the same thought myself.
But the analogy is wrong.
When you perform a calculation, you’re trying to find the answer, and the answer is always the same.*
Writing is different.
Sometimes, of course, with a writing task, the answer is always the same, in the sense that everyone in the field knows what it is and agrees that it's correct. Short-answer tests in knowledge-heavy fields, for example. The AI does well with those and, in producing the correct answer, functions much like a calculator.
I saw an economics professor make the same observation (wish I’d saved the link) about an AI-produced short answer in his field.
But when you’re writing to find an answer, or to express an answer better than you could just by talking, you’re creating an answer (or a rhetorical effect) that doesn’t already exist, and this is true even when you think you know the answer going in. The process of writing changes the answer, hence the truism that writing is thinking.
“You write to help yourself think better, then think to help yourself write better.” – Joseph Williams, Style: Lessons in Clarity and Grace
All the AI can do is string together whatever patterns it has detected. The results are mostly mind-numbingly abstract and banal, and not infrequently egregiously wrong to boot, because the AI is nothing more than an occasionally hallucinatory conventional-wisdom generator. So let's not even get started with mini child-size AI calculators in the classroom. Please, no.
Point is: most of the time, writing by machine is nothing like calculating by machine.
There is no royal road to the creation of prose a person would actually want to read and, afterwards, be happy that they did.
* This is not a reason for children to use calculators instead of learning their times tables.