In one of my earlier posts on the limits of Artificial Intelligence, I reported on the difficulty AI has in figuring out what a pronoun like "it" refers to. As measured by so-called Winograd schemas like the pair below, even the most sophisticated AI performs at levels not much better than chance:
SENTENCE 1: “The city council refused the demonstrators a permit because they feared violence.”
QUESTION: Who feared violence?
A. The city council B. The demonstrators
SENTENCE 2: “The city council refused the demonstrators a permit because they advocated violence.”
QUESTION: Who advocated violence?
A. The city council B. The demonstrators
The problem is that even state-of-the-art AI lacks worldly knowledge, including background knowledge about city councils and demonstrators, their typical concerns, and their plausible goals. Who might fear violence and feel a responsibility to prevent it? Who might advocate violence?
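To make the test concrete, here is a minimal sketch of how a Winograd schema pair might be represented and scored. The `WinogradSchema` class and the `resolve_pronoun` baseline are my own illustrative names under assumed conventions, not code from any published benchmark; the point is that the two sentences differ by a single word, so a system with no world knowledge that always guesses the same referent lands at exactly chance.

```python
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    sentence: str        # the full sentence containing the ambiguous pronoun
    pronoun: str         # the pronoun to resolve (e.g., "they")
    candidates: tuple    # the two possible referents
    answer: int          # index of the correct referent (0 or 1)

# The schema pair from the post: the sentences differ by one word
# ("feared" vs. "advocated"), but the correct referent flips.
SCHEMAS = [
    WinogradSchema(
        sentence="The city council refused the demonstrators a permit "
                 "because they feared violence.",
        pronoun="they",
        candidates=("The city council", "The demonstrators"),
        answer=0,
    ),
    WinogradSchema(
        sentence="The city council refused the demonstrators a permit "
                 "because they advocated violence.",
        pronoun="they",
        candidates=("The city council", "The demonstrators"),
        answer=1,
    ),
]

def resolve_pronoun(schema: WinogradSchema) -> int:
    """Placeholder resolver: always picks the first candidate.

    A system with no background knowledge can do no better than a fixed
    or random guess, which is why chance-level accuracy (~50%) is the
    baseline the post refers to.
    """
    return 0

def accuracy(schemas) -> float:
    correct = sum(resolve_pronoun(s) == s.answer for s in schemas)
    return correct / len(schemas)

if __name__ == "__main__":
    # The naive baseline gets exactly one of the two schemas right: 50%.
    print(f"Accuracy: {accuracy(SCHEMAS):.0%}")
```

Any resolver that ignores the meaning of "feared" versus "advocated" will get at most one of the two items right, no matter which referent it prefers; that is what makes the schema pair a test of background knowledge rather than of surface statistics.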
Of course, even the occasional human has been known to have deficits in background knowledge, and those deficits have increasingly been implicated in problems with reading comprehension. In general, if you lack the knowledge that a text takes for granted, you’ll struggle to make sense of anything the text asserts that’s based on that knowledge.
For example, if you don’t know about the functions of antimicrobial agents, or what bacterial resistance and immunosuppression entail, you’ll struggle to make sense of this paragraph (even if you happen to know words like “pathogenic” and “concomitant”):