Where is AI where we really need it?

Thanks to my readers, I’ve learned of several typos in my recent books that went undetected by me, my editors, and my early readers. They also went undetected by Microsoft Word and Grammarly. (I have not found Grammarly helpful for style, but it is useful for catching some typos.)

But what about ChatGPT? Surely a technology that can mimic human texts so convincingly that professors are turning to AI detection tools (to determine whether their students actually wrote their papers themselves) should be able to take an existing paper and detect all its typos.

The issue is that proofreading entails text processing, while text generation, especially when it comes to AI, does not. For text generation, AI can rely solely on statistics and pattern recognition: specifically, the patterns and frequencies of certain words or structures in the gigantic amounts of data it’s been fed. It can then use these patterns and frequencies to generate text that is similar to the texts it was trained on, in particular to those texts that, based on their patterns, are most statistically relevant to the prompt.
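To make the “patterns and frequencies” point concrete, here is a deliberately tiny sketch in Python. It is not how a real language model works internally (the corpus, the word counts, and the whole setup are made up for illustration, and the real thing operates at an enormously larger scale), but the core move is the same in spirit: learn which words tend to follow which, then generate text by repeatedly picking a statistically plausible next word.

```python
import random
from collections import defaultdict

# A toy stand-in for the "gigantic amounts of data" a real model is fed.
corpus = (
    "careful reading of good writing helps careful writers . "
    "good writing rewards careful reading of good prose ."
).split()

# Record which words follow which (bigram frequencies): sampling from
# these lists later is what "using patterns and frequencies" amounts to.
followers = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word].append(next_word)

def generate(start_word, length=8):
    """Generate text by repeatedly choosing a statistically likely next word."""
    words = [start_word]
    for _ in range(length):
        candidates = followers.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("careful"))
# e.g. "careful reading of good writing rewards careful reading of"
```

Nothing in this loop understands what “reading” or “writing” means; it only knows what tends to come next.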

No need, here, for AI to actually understand anything. Nor does it appear to, as we see, for example, in Catherine’s recent posts.

Most text processing, in contrast, cannot rely on statistics and pattern recognition (the one apparent exception being the text processing that goes into estimating the likelihood that a paper was generated by AI). Most text processing, that is, involves something more akin to comprehension. To summarize, AI needs to understand the main points; to simplify, it needs to understand well enough to paraphrase; and to detect many common writing errors, e.g., whether “or” should have been “of”, or vice versa (“careful reading of good writing” vs. “careful reading or good writing”), it needs comprehension (a sketch of why statistics alone fall short here follows the list below). Comprehension, in turn, involves:

1. Word, sentence, paragraph, and text-level semantics and pragmatics (including irony and subtle forms of negation like “mixed up with”).

2. Knowledge of how linguistic meaning maps onto real-world phenomena, including sensory experience. For this, we may need ambient robots that can process not just visual information (parsing the world into a 3D model of shapes and colors), but also auditory information (including phonological information), and tactile and chemical information.
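As promised above, here is a sketch of why the “of”/“or” example resists a purely statistical check. All of the counts below are invented for illustration, and real grammar checkers are more elaborate, but the underlying problem stands: both phrasings are common, well-formed English, so word frequencies alone give a checker no reason to suspect anything.

```python
# Invented bigram counts (purely illustrative) of the kind a
# frequency-based checker might consult.
bigram_counts = {
    ("careful", "reading"): 800,
    ("reading", "of"): 5400,   # "reading of good writing", ...
    ("reading", "or"): 3100,   # "reading or writing", "reading or listening", ...
    ("or", "good"): 1200,
    ("good", "writing"): 2600,
}

def looks_like_a_typo(prev_word, word, threshold=100):
    """Flag a word only if it almost never follows the word before it."""
    return bigram_counts.get((prev_word, word), 0) < threshold

sentence = "careful reading or good writing".split()  # "or" should be "of"
for prev_word, word in zip(sentence, sentence[1:]):
    if looks_like_a_typo(prev_word, word):
        print(f"possible typo: '{prev_word} {word}'")

# Prints nothing: every word pair here is ordinary English, so the
# frequencies never raise an alarm. Only understanding the sentence
# would reveal that the writer meant "of".
```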

Not surprisingly, AI seems nowhere near able to scale what some in the field have called the “barrier of meaning.”

The irony is that proofreading, summarizing, and simplifying are much more useful than text generation, which probably raises more problems than it solves.

More useful but so much more difficult–at least for AI.

One thought on “Where is AI where we really need it?”

  1. It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows, in a parsimonious way, for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions. No other research I’ve encountered is anywhere near as convincing.

    I post because, in almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
