The AI has summoned another zombie idea back from the dead. This time it’s the flipped classroom, a concept that never works but also never dies:
While ChatGPT and similar tools will not be replacing clinicians anytime soon, the technology does highlight the triviality of the memorization of medical facts. …
The performance of ChatGPT on the USMLE [U.S. Medical Licensing Exam] is a wake-up call that the medical school curriculum and evaluation systems must change.
For years, leading medical educators like Charles Prober, MD, founding director of the Stanford Center for Health Education, have been advocating for a move away from traditional lectures and the memorization of facts. He advocated for a “flipped classroom” approach to medical education, where students can gather facts and lectures on their own time, and then come to the classroom to interact with professors and peers to practice problem-solving and data analysis. … This approach aims to de-emphasize the memorization of medical facts and focus on interacting with data and resources to develop critical thinking skills.
Number one: The AI may have passed the physicians’ licensing exam, but it can’t be trusted in a medical setting because a) it can’t read and b) it hallucinates. Memorization of medical facts is going to be essential for doctors trying to sort fact from confabulation.
Number two: Search terms. At a minimum, any doctor who plans to let the AI do the diagnosing is going to have to know what, specifically, to ask the AI to diagnose. That means he or she is going to have to know medical terminology, and that means memorizing concepts along with the words physicians use to discuss them.
Same goes for being able to understand any answer the AI comes back with, which will certainly include medical vocabulary.
I can’t believe we still have to go over these points.
Number three: It’s not possible to think about knowledge stored on Google.
By the same token, it’s also not possible to think about knowledge typed up by the AI, at least not until we move that typed-up knowledge into working memory. The problem with all anti-memorization schemes is that working memory is tiny. It can hold 3 to 5 items at a time, and that’s it, no exceptions.
However, those 3 to 5 items can be any size, and that’s where memorization comes in. Knowledge stored inside our brains is assembled into large, connected schemas that, for the purposes of working memory, function as just one unit. When we think about content stored outside our heads, we can think about only 3 to 5 small bits of information at a time. But when we think about content we already know and understand, content stored in long-term memory, we can think about 3 to 5 large and complex knowledge structures at a time. Complex knowledge makes complex thinking possible, which means that complex thinking happens only with content that has been consolidated in long-term memory.
As a direct result of human cognitive architecture, people who know what they’re doing always, always, without exception, possess a very large fund of knowledge stored inside their own heads, not outside their heads in books, Google, personal assistants, or ChatGPTs.
Doctors will always have to memorize facts and concepts (and vocabulary), come what may, because we don’t live in the Matrix, and we can’t upload knowledge from the AI.