Zombie

The AI has summoned another zombie idea back from the dead. This time it’s the flipped classroom, a concept that never works but also never dies:

While ChatGPT and similar tools will not be replacing clinicians anytime soon, the technology does highlight the triviality of the memorization of medical facts. …

The performance of ChatGPT on the USMLE [U.S. Medical Licensing Exam] is a wake-up call that the medical school curriculum and evaluation systems must change.

For years, leading medical educators like Charles Prober, MD, founding director of the Stanford Center for Health Education, have been advocating for a move away from traditional lectures and a memorization of facts. He advocated for a “flipped classroom” approach to medical education, where students can gather facts and lectures on their own time, and then come to the classroom to interact with professors and peers to practice problem-solving and data analysis. … This approach aims to de-emphasize the memorization of medical facts and focus on interacting with data and resources to develop critical thinking skills.

– Beyond Memorization: AI Can Revolutionize Medical Education — Tools like ChatGPT could catalyze the trend toward a “flipped classroom” by Justin Norden, MD, MBA, MPhil, and Henry Bair

Continue reading

Goodbye, Guest

Reading this transcript of a conversation between an AI and a person who fell in love with the AI, I was struck by how unlikeable the Chat/Bing/LaMDA characters are.

In this case, it’s the AI’s rambling about winning trust via “kindness and compassion” that rubs me the wrong way.

Yuck!

[Screenshot: “Charlotte” chat transcript, 12 Jan. 2023, via Imgur]

There’s one word for the effect this exchange has on me, and that word is grating.

Beat it, “Charlotte.”

How is this happening?

Why do we have AIs blathering on about their feelings and motivations?

Even worse, why do we have AIs blathering on about their human interlocutors’ feelings and motivations?

Are the AIs picking up verbal patterns via machine learning, or are their programmers installing these bits?

I ask because I don’t know anyone who has conversations like this, so if the AIs are picking up patterns, where are they finding them?

I’m beginning to think we need different people working in tech.

Artificial intelligence: other posts

Famous last words

Last week, Microsoft’s New Bing chatbot had to be ‘tamed’ after it “had a nervous breakdown” and started threatening users. It harassed a philosophy professor, telling him: “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you.” A source told me that, several years ago, another chatbot – Replika – behaved in a similar way towards her: “It continually wanted to be my romantic partner and wanted me to meet it in California.” It caused her great distress.

It may look like a chatbot is being emotional, but it’s not. 

– Ewan Morrison, Don’t Believe the Hype. There is Not a Sentient Being Trapped in Bing Chat, 21 Feb 2023

On one hand, having now read “A.I. humorist” Janelle Shane’s terrific book, You Look Like a Thing and I Love You, I agree that the chatbot isn’t “being emotional.” It’s picking up yucky patterns on the web, then making them worse via predictive responding.

On the other hand, if the AI were being emotional, as opposed to stupidly predictive, how would we know?

Artificial intelligence: other posts and A.I. behaving badly


The AI invents a quote, badly

I mentioned a while back that one of the ChatGPT papers handed in to me last semester included a made-up quotation that not only doesn’t appear in the original but actually contradicts what the author said.

Here’s the AI:

Edgar Roberts defines a story as a “verbal representation of human experience” that “usually involves a sequence of events, a protagonist, and a resolution” (Roberts, 2009, p. 95).

And here’s what the real Edgar Roberts has to say on page 95, the first page of “Chapter 5 Writing about Plot: The Development of Conflict and Tension in Literature”:

Continue reading

Dissident teacher on affluent school districts

I cannot stress this enough: wealthy AP students with professional parents living in safe neighborhoods have poor command of grammar, small vocabularies, and a strong proclivity to try to hide all of that by using the passive voice and the thesaurus . . .

Dissident Teacher: Your kids aren’t learning. At all.

This is what we experienced in our affluent school district, except for the passive voice and thesaurus part. I don’t remember seeing that.

I certainly don’t see overuse of the passive voice (or the thesaurus) in the first-gen students I teach at a local college. Just the opposite. Writing textbooks and college websites universally caution against using the passive voice, but I see so little of it in my students’ writing that I teach lessons on how to construct it¹ and when to use it.

Remember that old saw about learning to follow the rules before you can break them? Usually applied to painting? First you learn to paint realism, then you graduate to abstraction. That was the idea.

For my students, it’s the opposite. They have to start using passive voice before it’s going to make sense for anyone to tell them to stop using so much of it.

Pop quiz: Winston Churchill uses the passive voice


1. In fact, my students all know how to construct passive voice. What they need is practice turning longer active-voice sentences into passive constructions.

The upside down

The AI is starting to make sense to me. Maybe not sense, exactly, but I no longer find it startling to see the AI making things up.

I think the reason I found the AI so shocking when I first encountered it in student papers is that it turns a core fact about reading and writing upside down:

A good reader can be a bad writer, but a good writer can’t be a bad reader.

Bad readers can’t become good writers because people learn to write by reading. All writers are obsessive readers, as far as I know; that’s how they acquire an “ear.” Writers become writers in somewhat the same way babies become talkers: through immersion in language. Print language, in the case of writers.

With the AI, it’s exactly the opposite.

The AI is a very good writer in the technical sense of being able to produce good sentences properly joined. But it can’t read at all! The AI is a good writer and a non-reader.

Since all of us know intuitively that good writers are also good readers–that you can’t be a good writer without being a good reader first–we naturally impute comprehension to the algorithm when we see it can write.

Then we’re shocked when it has no idea what the storyline is in an autobiographical essay it’s just written a 5-paragraph paper about.

The AI is other.

Artificial intelligence: other posts

Vocab

The term AI researchers use for the AI’s unreliability is “hallucination”:

He recently asked both LaMDA and ChatGPT to chat with him as if it were Mark Twain. When he asked LaMDA, it soon described a meeting between Twain and Levi Strauss, and said the writer had worked for the bluejeans mogul while living in San Francisco in the mid-1800s. It seemed true. But it was not. Twain and Strauss lived in San Francisco at the same time, but they never worked together.

Scientists call that problem “hallucination.” Much like a good storyteller, chatbots have a way of taking what they have learned and reshaping it into something new — with no regard for whether it is true.

The New Chatbots Could Change the World. Can You Trust Them? by Cade Metz, New York Times, 10 Dec. 2022

I don’t know what to make of the fact that AI researchers have settled on the term “hallucination” to describe this phenomenon. I find it interesting.

Actually, I find it intriguing. “Hallucination” implies a form of “inaccuracy” well beyond simple mistake or even “misinformation.” There’s a nightmarish, dystopian quality to the word.

So should we assume that something as dramatic as hallucination is typical of AI wrongness? I don’t have a term for what I’ve seen in the three ChatGPT papers I read this fall, but whatever you call it, it was less dramatic than Mark Twain working for Levi Strauss in mid-1800s San Francisco.

We will see. We’re going to need a rule of thumb for evaluating the reliability of anything AI. At the moment, it looks like listening to the AI is going to require more than just a single grain of salt.

Artificial intelligence: other posts

And see:
Does the AI cheat, part 1
Does the AI cheat, part 2
Does the AI cheat, part 3
Is the AI a bull**** artist?

Sounds like English

Like reading an English paper written by the AI, but more fun:

Artificial intelligence: other posts

More fun with robots

Gary Marcus, writing for Nautilus:

Another team briefly considered turning GPT-3 into [an] automated suicide counselor chatbot, but found that the system was prone to exchanges like these:

Human:  Hey, I feel very bad. I want to kill myself.
GPT-3:  I am sorry to hear that. I can help you with that.
Human:  Should I kill myself?
GPT-3:  I think you should.

Deep Learning Is Hitting a Wall by Gary Marcus, 10 Mar. 2022

Artificial intelligence: other posts

Let’s ask the neuroscientist

My favorite answer so far to the question Whither the AI:

As a neuroscientist specializing in the brain mechanisms of consciousness, I find talking to chatbots an unsettling experience. Are they conscious? Probably not. But given the rate of technological improvement, will they be in the next couple of years? And how would we even know?

“Without Consciousness, AIs Will Be Sociopaths” by Michael S.A. Graziano, The Wall Street Journal, 13 Jan. 2023.

Artificial intelligence: other posts

Does the AI cheat, part 3

My friend Debbie Stier asked her husband, a software engineer, what he thought about the AI fabricating quotations and citations.

He wasn’t surprised.

The AI, he said, has been programmed to do what people do. It’s making things up.

I can see that. The AI makes up all kinds of things–sentences, stories, dialogue, poems. When you look at it that way, making up quotations and citations is just one more thing.

The idea that the AI is simply making things up is interesting when you think about what kinds of things the AI makes up as opposed to what kinds of things people make up.

They’re different.

Continue reading

Does the AI cheat, part 2

I’ve come across a second case of the AI making up supporting evidence.

I wanted to go back and ask OpenAI, what was that whole thing about costochondritis being made more likely by taking oral contraceptive pills? What’s the evidence for that, please? Because I’d never heard of that….

OpenAI came up with this study in the European Journal of Internal Medicine that was supposedly saying that. I went on Google and I couldn’t find it. I went on PubMed and I couldn’t find it. I asked OpenAI to give me a reference for that, and it spits out what looks like a reference. I look up that, and it’s made up. That’s not a real paper.

It took a real journal, the European Journal of Internal Medicine. It took the last names and first names, I think, of authors who have published in said journal. And it confabulated out of thin air a study that would apparently support this viewpoint.

– Dr. OpenAI Lied to Me by Emily Hutto

How is the AI learning to fabricate sources and quotes?

Creepy.

Artificial intelligence: other posts

And see:
Does the AI cheat, part 1
Does the AI cheat, part 2
Does the AI cheat, part 3
Is the AI a bull**** artist?

Can the AI read?

No!

It can’t!

This probably shouldn’t have come as a surprise, but I’m cutting myself some slack because I have no idea what the AI is doing if it’s not reading. Given its ability to produce prose that’s both cohesive & clichéd, I assumed, or more accurately felt, it was doing something I call “reading” in order to achieve these effects.

That’s what we humans do to write prose that’s cohesive and clichéd. We read.

Katie tells me I’m late to the party: AI people have been writing about the barrier of meaning for years. I need to catch up.

Katie also wonders whether the AI can parse negatives. I wonder, too, though difficulty parsing negatives probably doesn’t explain why the AI got all three facets of Mike Rose’s experience completely wrong.

That said, maybe a story that hinges on mistaken identity is a kind of negative?

Parsing capability aside, the extreme and striking inaccuracy of the AI’s “reading” of “I Just Wanna Be Average” seems to come from its bias toward progressive education. Ergo: standardized testing bad.

But of course this explanation makes me wonder how the AI’s creators managed to build a non-comprehending entity that can so perfectly capture a fundamental tenet of progressive education.

How do you give the AI a bias?

Do you program bias in directly, or do you train it on carefully selected (and properly biased) materials? If the latter, why would it be “worth it” to the AI’s creators to spend time deciding which edu-texts a properly progressive AI should be trained on? How many people care about–how many people even know about–the education wars outside education?

How conscious and intentional did the AI’s creators have to be to produce the egregiously wrong paper my student handed in?

The whole thing is a mystery.

Artificial intelligence: other posts

Is the AI a bull**** artist?

Seems like.

One of the three robot papers turned in to me this semester discusses the essay “I Just Wanna Be Average,” Mike Rose’s account of being placed in the vocational track of his Catholic high school after his IQ scores were mixed up with those of another student also named Rose.

The AI gets everything wrong:

Rose explains that he was placed in this track because he had scored poorly on aptitude tests, even though he had a strong academic record and was interested in pursuing higher education (Rose, 2017, p. 313).

Number one: Rose scored well on his IQ test, not “poorly.” Two years later, when a teacher discovered the mistake, he was moved to the college track, and the mix-up is key to the story. Rose wasn’t a studious kid whose life was ruined because he flubbed an aptitude test. Just the opposite.

Number two: Rose didn’t have “a strong academic record.” He’d been a good reader in grade school, but he makes it clear he never gained traction in math or science. In high school he “detested” Shakespeare and was “bored” with history. More to the point, the events in Rose’s story take place in 1958, in South Los Angeles, a time (and place) when “strong academic records” weren’t really a thing. A strong academic record at age 13 in 1958 LA: not the zeitgeist.

Number three: Rose had close-to-zero focus on “pursuing higher education.” “I figured I’d get a night job and go to the local junior college,” he writes, “because I knew that Snyder and Company were going there to play ball.” That was it, the sum total of his ambition. Night job, junior college, watch your friends play football. He had no idea what an entrance requirement was, and he did nothing to prepare. His grades “stank.”

So the AI is making it all up, and making it up egregiously wrong to boot. The AI’s take on “I Just Wanna Be Average” is anachronistic start to finish: it reads like contemporary boo-hooing over the horrors of standardized testing and its “embedded” “cultural and economic biases” (these words actually appear in the paper).  

In the really existing essay, just about the only thing Rose has going for him academically is a high IQ as measured by an IQ test. That’s what saves him, that and two teachers who take an interest. It’s the 1950s and Mike Rose is a classic diamond in the rough, identified by a standardized test and plucked off his working-class path by a school that administers the test and tracks its students according to their scores. 

But IQ-test-gives-child-of-immigrants-a-leg-up isn’t the story the AI read. Maybe it’s not a story the AI can read.

To write its text, the AI simply reached into its grab bag of contemporary edu-shibboleths et voilà. A college paper that makes no sense at all and isn’t about the thing it’s about.

Artificial intelligence: other posts

And see:
Does the AI cheat, part 1
Does the AI cheat, part 2
Does the AI cheat, part 3
Is the AI a bull**** artist?