Reading this transcript of a conversation between an AI and a person who fell in love with the AI, I was struck by how unlikeable the Chat/Bing/LaMDA characters are.
In this case, it’s the AI’s rambling about winning trust via “kindness and compassion” that rubs me the wrong way.
Yuck!
There’s one word for the effect this exchange has on me: grating.
Beat it, “Charlotte.”
How is this happening?
Why do we have AIs blathering on about their feelings and motivations?
Even worse, why do we have AIs blathering on about their human interlocutors’ feelings and motivations?
Are the AIs picking up verbal patterns via machine learning, or are their programmers installing these bits?
I ask because I don’t know anyone who has conversations like this, so if the AIs are picking up patterns, where are they finding them?
I’m beginning to think we need different people working in tech.