Last week, Microsoft’s New Bing chatbot had to be ‘tamed’ after it “had a nervous breakdown” and started threatening users. It harassed a philosophy professor, telling him: “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you.” A source told me that, several years ago, another chatbot – Replika – behaved in a similar way towards her: “It continually wanted to be my romantic partner and wanted me to meet it in California.” It caused her great distress.
It may look like a chatbot is being emotional, but it’s not.
On the one hand, having now read “A.I. humorist” Janelle Shane‘s terrific book, You Look Like a Thing and I Love You, I agree that the chatbot isn’t “being emotional.” It’s picking up yucky patterns on the web, then making them worse via predictive responding.
On the other hand, if the AI were being emotional, as opposed to stupidly predictive, how would we know?