In my previous posts on language processing via Artificial Intelligence, I discussed the inability of Twitter AI to distinguish joke-threats embedded within tweets from actual threats tweeted as such; the inability of apps like Siri and Alexa to respond appropriately to questions that haven’t been hard-coded into their systems; and the inability of email AI to distinguish emails from long-time correspondents sent to multiple people from actual spam.
But surely state-of-the-art Artificial Intelligence is more accomplished?
A recent New York Times article on a new artificial intelligence system suggests that the answer is YES:
This new system, GPT-3, had spent those months learning the ins and outs of natural language by analyzing thousands of digital books, the length and breadth of Wikipedia, and nearly a trillion words posted to blogs, social media and the rest of the internet.
As a result of all this encyclopedic, in-depth “reading”:
It generates tweets, pens poetry, summarizes emails, answers trivia questions, translates languages and even writes its own computer programs, all with very little prompting. Some of these skills caught even the experts off guard.
Several blog posts generated by GPT-3 inspired sixty readers to subscribe to the blog, with only a few suspecting that the posts were machine-generated. Indeed:
One of the blog posts — which argued that you can increase your productivity if you avoid thinking too much about everything you do — rose to the top of the leader board on Hacker News, a site where seasoned Silicon Valley programmers, engineers and entrepreneurs rate news articles and other online content. (“In order to get something done, maybe we need to think less,” the post begins. “Seems counterintuitive, but I believe sometimes our thoughts can get in the way of the creative process.”)
GPT-3 can also instantaneously answer questions like “How do we become more creative?” with responses like this one:
I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more you get exposed to different people, to different opportunities, to different places and to different challenges. And the more diverse that is, the more likely you’ll be to be able to put the dots together to form something new. And in many ways, I think if you want to be creative, you have to go for it. If you want to be a writer, you have to write, if you want to be a musician, you have to create music, if you want to be a comedian, you have to create comedy, and the more you create, the more likely it is that you’ll stumble onto some interesting stuff.
All this intelligence is an emergent property of a system that was designed, based on all its encyclopedic, in-depth “reading”, to do, as the Times puts it, “just one thing”:
predict the next word in a sequence of words.
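To make that one task concrete, here is a toy sketch of next-word prediction using simple bigram counts. Everything in it (the corpus, the function names) is illustrative, and it is emphatically not how GPT-3 works internally: GPT-3 is a neural network that conditions on long stretches of preceding text, not a word-frequency table. But the objective is the same: given what came before, guess what word comes next.

```python
from collections import Counter, defaultdict

# A tiny toy corpus standing in for GPT-3's near-trillion-word training data.
corpus = (
    "if you want to be a writer you have to write "
    "if you want to be a musician you have to create music"
).split()

# For each word, count which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("to"))  # the corpus most often follows "to" with "be"
```

Scaled up from a few dozen words of context-free counting to billions of learned parameters over the text of the internet, this same guessing game is what produces GPT-3's tweets, poems, and blog posts.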
So just how far can predicting the next word get you? The answer would appear to be pretty darn far.
But here, to me, is the real question. How much actual linguistic understanding lies beneath GPT-3’s tweets, poetry, summaries, answers to trivia questions, and language translations, as well as its ruminations above about productivity and creativity?
Stay tuned for… Part II.