In the Times today:
In games like chess, no human can hope to beat a computer. What happens when the same thing occurs in art, politics or religion?
You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills by Yuval Harari, Tristan Harris and Aza Raskin
I have a couple of thoughts about this.
Number one: while it’s true that no human can hope to beat a computer at chess, it’s equally true that computers have had massively more experience playing chess than any human has had or ever will have:
AIs are very slow learners, needing years’ or even centuries’ worth of practice at playing chess or riding bicycles or playing computer games.
You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place by Janelle Shane, p. 161
Centuries.
How long does it take a human to become a champion at chess?
10 years?
Would a computer with 10 years’ training beat a human with 10 years’ training? Presumably not, or the AI wouldn’t need to continue training for centuries after it hits the 10-year mark.
Number two: this brings me to a question I’ve been mulling.
As things stand, the AI is a terrible writer. AI prose is cohesive and grammatically correct, but it’s unreadable. I couldn’t get anyone in my orbit to read any of the three AI-written papers turned in to me last fall, which I needed someone to do because I wanted a second opinion. But no one would read them! Everyone agreed to, but no one actually did.
In reality, the fact that no one would read any of the papers (short, five-paragraph papers), even after promising they would, was the second opinion: AI prose is unreadable, and these particular papers were obviously unreadable, too, seeing as how no one was reading them. When you try to read the AI, your eyes roll up in your head.
Why is that?
The AI, we’re told, has been trained on “the Internet,” so maybe that’s the problem: the AI has absorbed the statistical properties of a lot of writing that’s dull or worse.
So what would happen if you trained the AI only on good or great writing?
Would its writing become good or great, the way its chess playing did?
I have my doubts, but this thought experiment raises a second question: is there even enough good writing in existence to give the AI centuries of training?
Probably not.