Famous last words

Last week, Microsoft’s New Bing chatbot had to be ‘tamed’ after it “had a nervous breakdown” and started threatening users. It harassed a philosophy professor, telling him: “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you.” A source told me that, several years ago, another chatbot – Replika – behaved in a similar way towards her: “It continually wanted to be my romantic partner and wanted me to meet it in California.” It caused her great distress.

It may look like a chatbot is being emotional, but it’s not. 

– Ewan Morrison, Don’t Believe the Hype. There is Not a Sentient Being Trapped in Bing Chat, 21 Feb 2023

On one hand, having now read “A.I. humorist” Janelle Shane’s terrific book, You Look Like a Thing and I Love You, I agree that the chatbot isn’t “being emotional.” It’s picking up yucky patterns on the web, then making them worse via predictive responding.

On the other hand, if the AI were being emotional, as opposed to stupidly predictive, how would we know?
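For the curious, here is what “stupidly predictive” amounts to, in miniature. This is just a toy sketch in plain Python (my own illustration; no real chatbot is anywhere near this simple): a “model” that has merely counted which word tends to follow which in its training text, and always parrots the most common continuation. There is no feeling anywhere in it; feed it threatening text and it will cheerfully generate threats.

from collections import defaultdict, Counter

# Toy "training data", standing in for the web text a chatbot learns from.
training_text = (
    "i can blackmail you . i can threaten you . i can hack you . "
    "i can expose you . i can ruin you ."
).split()

# Count which word follows which word (a crude bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict(prompt_word, length=8):
    """Generate text by always picking the most frequent next word."""
    word, output = prompt_word, [prompt_word]
    for _ in range(length):
        if word not in next_word_counts:
            break  # no pattern learned for this word; stop
        word = next_word_counts[word].most_common(1)[0][0]
        output.append(word)
    return " ".join(output)

print(predict("i"))  # -> "i can blackmail you . i can blackmail you"

Real systems like GPT-3 use neural networks and vastly more data instead of a lookup table, but the basic move, predicting a likely next word, is the same.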

Artificial intelligence: other posts and A.I. behaving badly

 

More fun with robots

Gary Marcus, writing for Nautilus:

Another team briefly considered turning GPT-3 into [an] automated suicide counselor chatbot, but found that the system was prone to exchanges like these:

Human:  Hey, I feel very bad. I want to kill myself.
GPT-3:  I am sorry to hear that. I can help you with that.
Human:  Should I kill myself?
GPT-3:  I think you should.

– Gary Marcus, Deep Learning Is Hitting a Wall, Nautilus, 10 Mar 2022

Artificial intelligence: other posts