I love this!
About 12 minutes in, “ZDogg” and Marty Makary talk about their problems with AI censorship. ZDogg says he’s had whole pages taken down by Facebook, no reason given; Makary says he’s managed to evade the AI censors thus far, but it’s probably only a matter of time.
Makary says people are constantly telling him, ‘What medicine needs is artificial intelligence.’
Makary also says the UK medical establishment has done a much better job than ours … and that the two risk factors they found most predictive of COVID mortality (I think it was mortality per se) were sickle cell anemia and kidney disease. Which means vaccines ought to be delivered to dialysis centers, but no…
His complaints about medical-journal style sheets are hilarious.
Every day the junk mail in my inbox outnumbers the junk mail that goes straight to junk. But I figure that that’s the price I (we?) pay for email filters that err on the side of caution. Worse than getting all that email we don’t want to see is missing the email we care about.
One particular sort of email I care about these days is messages that give health updates for a relative of mine who is suffering from an idiopathic cancer (the kind that something like three people in the entire country have). She’s been on an experimental drug, and we’ve all been on tenterhooks wondering whether it will shrink the metastatic tumors that showed up in her lungs last February.
As I mentioned earlier, Twitter, having no legal or financial incentive to involve actual humans, appears to leave its decisions about account suspensions entirely up to its AI system. However, as it turns out, there is one way in which Twitter AI does invite human eyes in on the process.
Here’s how it works:
As I learned in the aftermath of my suspension from Twitter (see below), Twitter leaves its decisions about account suspensions entirely up to its AI system. No human eyes appear to be involved at any stage of the process. If Twitter AI finds a phrase anywhere in a tweet that it associates with a violent threat, it will, automatically and permanently, suspend the “offending” account.
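The behavior described above — a phrase match anywhere in a tweet triggering an automatic, permanent suspension, with no human review — amounts to simple keyword filtering. Here’s a toy sketch of that logic; the phrase list and function names are my own invention for illustration, not anything known about Twitter’s actual system:

```python
# Toy illustration of keyword-match moderation with no human in the loop.
# BLOCKLIST entries are hypothetical examples, not Twitter's real triggers.
BLOCKLIST = {"i will kill", "going to shoot"}

def flag_tweet(text: str) -> bool:
    """Return True if any blocklisted phrase appears anywhere in the tweet,
    regardless of context (quotation, joke, idiom, metaphor)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

def moderate(account: str, tweet: str) -> str:
    # A single match means automatic, permanent suspension -- no appeal
    # to human judgment at any stage.
    if flag_tweet(tweet):
        return f"{account}: suspended"
    return f"{account}: ok"
```

Note that such a filter can’t tell a threat from a quotation or a figure of speech — which is exactly the kind of false positive the suspension story below illustrates.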
You might think that permanently suspending an account for an infraction that didn’t actually occur might itself be a violation: a violation, that is, of the Terms and Conditions for Twitter use. And surely, you might think, there’s legal recourse.
But Twitter has that covered:
About two weeks ago, my Twitter account was suspended. The message arrived while I was in the middle of messaging one of my cronies about Facilitated Communication. When I clicked “send”, Twitter told me, “You are not permitted to perform this action. Learn more.” When I clicked on “Learn more”, Twitter told me, “Your account is suspended. Learn more.” When I clicked again, Twitter told me that the suspension was for “making violent threats.” “You may not threaten violence against an individual or a group of people,” Twitter explained.
My account is public, visible to my students and colleagues. Even if I were inclined to do so, it would be a really bad idea for me to make threats—let alone violent ones. I wasn’t, and hadn’t—as many of the supporters who subsequently came out in my defense kept telling @Twitter.
So what had triggered this suspension?
Today I had my monthly Zoom meeting with my linguistics colleagues, and we found ourselves talking about artificial intelligence. One of my colleagues, Debbie Dahl, recently wrote, along with Christy Doran, an editorial for Speech Technology Magazine entitled “Does your Intelligent Assistant Really Understand you?”
To address this, they put together a list of queries of the sort one might ask of an intelligent assistant like, say, Siri or Alexa.
Here are some examples of what happened.