My friend Debbie Stier asked her husband, a software engineer, what he thought about the AI fabricating quotations and citations.
He wasn’t surprised.
The AI, he said, has been programmed to do what people do. It’s making things up.
I can see that. The AI makes up all kinds of things: sentences, stories, dialogue, poems. When you look at it that way, making up quotations and citations is just one more thing.
The idea that the AI is simply making things up is interesting when you think about what kinds of things the AI makes up as opposed to what kinds of things people make up.
They’re different.
It’s also interesting when you think about the context in which humans make things up.
As a group, students cheat pretty often. Some plagiarize papers or parts of papers. Many more have cheated on a test or two.
Neither of these actions involves fabricating quotations or citations. I’ve never in my life seen a student outright invent a long-form citation, nor has my husband encountered a fabricated citation in decades of college teaching. We both find bizarre the very idea of a made-up citation using the real names and titles of real authors and real journals.
Student cheating entails copying, not fabricating: copying someone else's text, copying someone else's answer. When they cheat, students don't create something new. The AI cheats by fabricating; students cheat by stealing.
Humans do, of course, make things up when they shouldn't. Humans lie, cheat, prevaricate, self-delude, commit fraud, exaggerate, shade the truth, deny responsibility, fudge the facts, refuse to face the facts, confabulate, and lie by omission. Each of these falsehoods is a bit different from the others, and the fact that we have so many words for lying reminds me of the old saw about Eskimos having 50 words for snow. Eskimos have a lot of snow, and humans practice a lot of deception.
Lying, as opposed to cheating, often entails invention. Years ago Temple Grandin told me that one reason she would never lie, apart from the fact that she's a strictly moral person who doesn't want to lie, was that she didn't have the working memory capacity to manage a lie with more than one or two moving parts. She couldn't trust herself to remember something complicated she'd made up; she did trust herself to remember the simple truth.
The AI would be able to remember a complicated deception, but complicated deceptions aren't its thing. At least, I don't think they are. The AI doesn't need to deceive because it doesn't have anything to protect (not that we know of, anyway). Humans lie for gain; the AI lies to fill in the blanks.
One last thought. Humans are so often up to no good that I find it shocking to see the AI invent a new form of wrongness humans haven’t already come up with. People attribute false quotes to their political enemies all the time, of course, but students do not go to the effort of composing a novel quotation when they can (and should) copy one, which is what quoting is. The act of quoting is the act of directly copying text someone else wrote, with attribution. And students certainly don’t put in the time and energy it would take to invent a full-length citation. It simply doesn’t happen, and wouldn’t.
Will the AI come up with other never-before-seen innovations in rule-breaking?
I’m not looking forward to finding out.
And see:
Does the AI cheat, part 1
Does the AI cheat, part 2
Does the AI cheat, part 3
Is the AI a bull**** artist?