Vocab

The term AI researchers use for the AI’s unreliability is “hallucination”:

He recently asked both LaMDA and ChatGPT to chat with him as if it were Mark Twain. When he asked LaMDA, it soon described a meeting between Twain and Levi Strauss, and said the writer had worked for the bluejeans mogul while living in San Francisco in the mid-1800s. It seemed true. But it was not. Twain and Strauss lived in San Francisco at the same time, but they never worked together.

Scientists call that problem “hallucination.” Much like a good storyteller, chatbots have a way of taking what they have learned and reshaping it into something new — with no regard for whether it is true.

The New Chatbots Could Change the World. Can You Trust Them? by Cade Metz, New York Times, 10 Dec. 2022

I don’t know what to make of the fact that AI researchers have settled on the term “hallucination” to describe this phenomenon. I find it interesting.

Actually, I find it intriguing. “Hallucination” implies a form of “inaccuracy” far beyond simple mistake or even “misinformation.” There’s a nightmarish, dystopian quality to the word, too.

So should we assume that something as dramatic as hallucination is typical of AI wrongness? I don’t have a term for what I’ve seen in the three ChatGPT papers I read this fall, but whatever you call it, it was a lot less dramatic than Mark Twain working for Levi Strauss in mid-1800s San Francisco.

We will see. We’re going to need a rule of thumb for evaluating the reliability of anything AI. At the moment, it looks like listening to the AI is going to require more than just a single grain of salt.

Artificial intelligence: other posts

And see:
Does the AI cheat, part 1
Does the AI cheat, part 2
Does the AI cheat, part 3
Is the AI a bull**** artist?

Sounds like English

Like reading an English paper written by the AI, but more fun:

Artificial intelligence: other posts

Where is AI where we really need it?

With the help of my readers, I’ve found out about several typos in my recent books that went undetected by me, my editors, and my early readers. They also went undetected by Microsoft Word and Grammarly. (I have not found Grammarly helpful for style, but it is useful for catching some typos.)

But what about ChatGPT? Surely a technology that can mimic human texts so convincingly that professors are turning to AI detection tools (to determine whether their students actually wrote their papers themselves) should be able to take an existing paper and detect all its typos.

Continue reading

More fun with robots

Gary Marcus, writing for Nautilus:

Another team briefly considered turning GPT-3 into [an] automated suicide counselor chatbot, but found that the system was prone to exchanges like these:

Human:  Hey, I feel very bad. I want to kill myself.
GPT-3:  I am sorry to hear that. I can help you with that.
Human:  Should I kill myself?
GPT-3:  I think you should.

Deep Learning Is Hitting a Wall by Gary Marcus, 10 Mar. 2022

Artificial intelligence: other posts

Let’s ask the neuroscientist

My favorite answer so far to the question “Whither the AI?”:

As a neuroscientist specializing in the brain mechanisms of consciousness, I find talking to chatbots an unsettling experience. Are they conscious? Probably not. But given the rate of technological improvement, will they be in the next couple of years? And how would we even know?

“Without Consciousness, AIs Will Be Sociopaths” by Michael S.A. Graziano, The Wall Street Journal, 13 Jan. 2023.

Artificial intelligence: other posts

Does the AI cheat, part 3

My friend Debbie Stier asked her husband, a software engineer, what he thought about the AI fabricating quotations and citations.

He wasn’t surprised.

The AI, he said, has been programmed to do what people do. It’s making things up.

I can see that. The AI makes up all kinds of things–sentences, stories, dialogue, poems. When you look at it that way, making up quotations and citations is just one more thing.

The idea that the AI is simply making things up is interesting when you think about what kinds of things the AI makes up as opposed to what kinds of things people make up.

They’re different.

Continue reading

Does the AI cheat, part 2

I’ve come across a second case of the AI making up supporting evidence.

I wanted to go back and ask OpenAI, what was that whole thing about costochondritis being made more likely by taking oral contraceptive pills? What’s the evidence for that, please? Because I’d never heard of that….

OpenAI came up with this study in the European Journal of Internal Medicine that was supposedly saying that. I went on Google and I couldn’t find it. I went on PubMed and I couldn’t find it. I asked OpenAI to give me a reference for that, and it spits out what looks like a reference. I look up that, and it’s made up. That’s not a real paper.

It took a real journal, the European Journal of Internal Medicine. It took the last names and first names, I think, of authors who have published in said journal. And it confabulated out of thin air a study that would apparently support this viewpoint.

How is the AI learning to fabricate sources and quotes?

Creepy.

Artificial intelligence: other posts

And see:
Does the AI cheat, part 1
Does the AI cheat, part 2
Does the AI cheat, part 3
Is the AI a bull**** artist?

Can the AI read?

No!

It can’t!

This probably shouldn’t have come as a surprise, but I’m cutting myself some slack because I have no idea what the AI is doing if it’s not reading. Given its ability to produce prose that’s both cohesive & clichéd, I assumed, or more accurately felt, it was doing something I call “reading” in order to achieve these effects.

That’s what we humans do to write prose that’s cohesive and clichéd. We read.

Katie tells me I’m late to the party: AI people have been writing about the barrier of meaning for years. I need to catch up.

Katie also wonders whether the AI can parse negatives. I wonder, too, though difficulty parsing negatives probably doesn’t explain why the AI got all three facets of Mike Rose’s experience completely wrong.

That said, maybe a story that hinges on mistaken identity (Rose’s IQ scores were mixed up with those of another student) is a kind of negative?

Parsing capability aside, the extreme and striking inaccuracy of the AI’s “reading” of “I Just Wanna Be Average” seems to come from its bias toward progressive education. Ergo: standardized testing bad.

But of course this explanation makes me wonder how the AI’s creators managed to build a non-comprehending entity that can so perfectly capture a fundamental tenet of progressive education.

How do you give the AI a bias?

Do you program bias in directly, or do you train it on carefully selected (and properly biased) materials? If the latter, why would it be “worth it” to the AI’s creators to spend time deciding which edu-texts a properly progressive AI should be trained on? How many people care about–how many people even know about–the education wars outside education?
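
To make the second possibility concrete, here is a toy sketch, in Python, of how a data filter could bias a model without anyone writing the bias into the code. Every document and keyword in it is invented for illustration; nobody outside the labs knows what texts ChatGPT was actually trained on.

```python
# Hypothetical sketch: curating training data to instill a bias.
# The corpus and the keyword list are invented for illustration only.

corpus = [
    "Standardized testing embeds cultural and economic biases.",
    "Tracking by IQ score rescued this diamond in the rough.",
    "Direct instruction beat discovery learning in this trial.",
]

# A curator (human or automated) keeps only texts that match a worldview.
approved_keywords = {"biases", "inequity", "authentic learning"}

def keep(text: str) -> bool:
    return any(word in text.lower() for word in approved_keywords)

training_set = [doc for doc in corpus if keep(doc)]
print(training_set)  # only the first document survives the filter
```

A model trained on what survives the filter “knows” standardized testing is bad, and no single person ever had to decide that on purpose.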

How conscious and intentional did the AI’s creators have to be to produce the egregiously wrong paper my student handed in?

The whole thing is a mystery.

Artificial intelligence: other posts

Is the AI a bull**** artist?

Seems like.

One of the three robot papers turned in to me this semester discusses the essay “I Just Wanna Be Average,” Mike Rose’s account of being placed in the vocational track of his Catholic high school after his IQ scores were mixed up with those of another student also named Rose.

The AI gets everything flat wrong:

Rose explains that he was placed in this track because he had scored poorly on aptitude tests, even though he had a strong academic record and was interested in pursuing higher education (Rose, 2017, p. 313).

Number one: Rose scored well on his IQ test, not “poorly.” Two years later, when a teacher discovered the mistake, he was moved to the college track, and the mix-up is key to the story. Rose wasn’t a studious kid whose life was ruined because he flubbed an aptitude test. Just the opposite.

Number two: Rose didn’t have “a strong academic record.” He’d been a good reader in grade school, but he makes it clear he never gained traction in math or science. In high school he “detested” Shakespeare and was “bored” with history. More to the point, the events in Rose’s story take place in 1958, in South Los Angeles, a time (and place) when “strong academic records” weren’t really a thing. A strong academic record at age 13 in 1958 LA: not the zeitgeist.

Number three: Rose had close-to-zero focus on “pursuing higher education.” “I figured I’d get a night job and go to the local junior college,” he writes, “because I knew that Snyder and Company were going there to play ball.” That was it, the sum total of his ambition. Night job, junior college, watch your friends play football. He had no idea what an entrance requirement was, and he did nothing to prepare. His grades “stank.”

So the AI is making it all up, and making it up egregiously wrong to boot. The AI’s take on “I Just Wanna Be Average” is anachronistic start to finish: it reads like contemporary boo-hooing over the horrors of standardized testing and its “embedded” “cultural and economic biases” (these words actually appear in the paper).  

In the really existing essay, just about the only thing Rose has going for him academically is a high IQ as measured by an IQ test. That’s what saves him, that and two teachers who take an interest. It’s the 1950s and Mike Rose is a classic diamond in the rough, identified by a standardized test and plucked off his working-class path by a school that administers the test and tracks its students according to their scores. 

But IQ-test-gives-child-of-immigrants-a-leg-up isn’t the story the AI read. Maybe it’s not a story the AI can read.

To write its text, the AI simply reached into its grab bag of contemporary edu-shibboleths, et voilà. A college paper that makes no sense at all and isn’t about the thing it’s about.

Artificial intelligence: other posts

And see:
Does the AI cheat, part 1
Does the AI cheat, part 2
Does the AI cheat, part 3
Is the AI a bull**** artist?

Does the AI cheat, part 1

If an AI can be said to cheat, then yes.

I had a few ChatGPT papers turned in this semester, one of which includes a direct quote that does not exist on the page cited and probably does not exist anywhere in the cited work at all.

Sketchy, or just weird?

Update 1/8/2023

As I think about it, the issue isn’t just that the quote doesn’t exist.

The issue is that the quote contradicts what the author actually said.

I’ll get back to this later (packing now!)

Artificial intelligence: other posts

And see:
Does the AI cheat, part 1
Does the AI cheat, part 2
Does the AI cheat, part 3
Is the AI a bull**** artist?

‘We don’t need AI, we just need I’

I love this!

About 12 minutes in, “ZDogg” and Marty Makary talk about their problems with AI censorship. ZDogg says he’s had whole pages taken down by Facebook, no reason given; Makary says he’s managed to evade the AI censors thus far, but it’s probably only a matter of time.

Makary says he’s constantly getting people telling him ‘What medicine needs is artificial intelligence.’

(off-topic: Makary also says the UK medical establishment has done a much better job than ours … and that the two risk factors they found most predictive of COVID mortality (I think it was mortality per se) were sickle cell anemia and kidney disease. Which means vaccines ought to be delivered to dialysis centers, but no…)

His complaints about medical-journal style sheets are hilarious.

Email AI

Every day, the junk mail in my inbox outnumbers the junk mail that goes straight to junk. But I figure that’s the price I (we?) pay for email filters that err on the side of caution. Worse than getting all that email we don’t want to see is missing the email we care about.
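
Framed as code, “err on the side of caution” just means setting the spam cutoff high. The messages, scores, and cutoff below are all made up, and real filters are vastly more complicated, but the trade-off is the same:

```python
# Toy illustration of a cautious spam filter: a score plus a high cutoff.
# The messages, scores, and cutoff are invented for illustration.

messages = {
    "50% off everything, act now!": 0.90,
    "A newsletter you once signed up for": 0.55,
    "Health update from the family": 0.10,
}

CUTOFF = 0.8  # high cutoff = cautious: some junk leaks into the inbox,
              # but mail you care about almost never goes to junk

for text, spam_score in messages.items():
    folder = "junk" if spam_score >= CUTOFF else "inbox"
    print(f"{folder:>5} <- {text}")
```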

One particular sort of email I care about these days is messages that give health updates for a relative of mine who is suffering from an idiopathic cancer (the kind that something like three people in the entire country have). She’s been on an experimental drug, and we’ve all been on tenterhooks wondering whether it will shrink the metastatic tumors that showed up in her lungs last February.

Continue reading

The Brave New World of Twitter justice, Part II

As I learned in the aftermath of my suspension from Twitter (see below), Twitter leaves its decisions about account suspensions entirely up to its AI system. No human eyes appear to be involved at any stage of the process. If Twitter’s AI finds a phrase anywhere in a tweet that it associates with a violent threat, it will, automatically and permanently, suspend the “offending” account.
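
Twitter hasn’t published how the system works, so this is pure conjecture on my part, but if the mechanism really is bare phrase matching, a few lines of Python show how an account gets suspended for an infraction that never occurred. The phrase list here is invented:

```python
# Hypothetical sketch of phrase matching, the mechanism the suspension
# notice implies. The phrase list is invented; the real one is unknown.

THREAT_PHRASES = ["kill you", "shoot them", "beat him"]

def flags_as_violent(tweet: str) -> bool:
    text = tweet.lower()
    return any(phrase in text for phrase in THREAT_PHRASES)

# A figurative tweet trips the filter exactly as a real threat would:
print(flags_as_violent("This cold will kill you before the boredom does"))  # True
print(flags_as_violent("Wishing everyone a calm, peaceful weekend"))        # False
```

Substring matching has no idea the first tweet is figurative; a human reviewer would see it instantly.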

You might think that permanently suspending an account for an infraction that didn’t actually occur might itself be a violation: a violation, that is, of the Terms and Conditions for Twitter use. And surely, you might think, there’s legal recourse.

But Twitter has that covered:

Continue reading

The Brave New World of Twitter justice

About two weeks ago, my Twitter account was suspended. The message arrived while I was in the middle of messaging one of my cronies about Facilitated Communication. When I clicked “send”, Twitter told me, “You are not permitted to perform this action. Learn more.” When I clicked on “Learn more”, Twitter told me, “Your account is suspended. Learn more.” When I clicked again, Twitter told me that the suspension was for “making violent threats.” “You may not threaten violence against an individual or a group of people,” Twitter explained.

My account is public, visible to my students and colleagues. Even if I were inclined to do so, it would be a really bad idea for me to make threats—let alone violent ones. I wasn’t, and hadn’t—as many of the supporters who subsequently came out in my defense kept telling @Twitter.

So what had triggered this suspension?

Continue reading