Can LLMs tighten up “loose” sentences?

In an earlier post I wrote about how unrevised human prose sometimes contains sentences like these:

  1. In Temple Grandin’s Thinking in Pictures, it discusses how autistic people can be very visual in their thought processes.
  2. From talking with the student’s mother, it seems as though she is very satisfied with the accommodations he receives at school.
  3. For those individuals that are included with their regular education peers, they struggle more with accessing classroom reading materials because they are reading below grade level.

What all these sentences have in common is a certain looseness of structure. Topical material that belongs in subject position, right before the verb, is “factored out” into an introductory modifier. In its place are “light” subjects that refer back to this material: here, “it” and “they”.

One can tighten these sentences up by ditching the modifier and moving the content into subject position:

  1. Temple Grandin’s Thinking in Pictures discusses how autistic people can be very visual in their thought processes.
  2. My talk with the student’s mother gave me the impression that she is very satisfied with the accommodations he receives at school.
  3. Those individuals that are included with their regular education peers struggle more with accessing classroom reading materials because they are reading below grade level.

I was curious about whether LLMs can make this sort of revision. In ChatGPT 3.5’s case, the answer is yes. Here’s what I got when I asked it to revise each of these sentences (a sketch of how one might script this kind of query appears after the list):

  1. Temple Grandin explores how autistic individuals often rely on visual thinking in her book Thinking in Pictures.
  2. Talking with the student’s mother suggests she is very satisfied with the accommodations he receives at school.
  3. Those individuals who are included with their regular education peers struggle more to access classroom reading materials due to reading below grade level.
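
If you want to reproduce this kind of query through the API rather than the chat interface, a minimal sketch in Python might look like the one below. The model name and prompt wording here are illustrative assumptions, not a record of exactly what I typed into ChatGPT.

```python
# Minimal sketch: asking an OpenAI chat model to tighten a "loose" sentence.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

loose_sentences = [
    "In Temple Grandin's Thinking in Pictures, it discusses how autistic "
    "people can be very visual in their thought processes.",
    "From talking with the student's mother, it seems as though she is very "
    "satisfied with the accommodations he receives at school.",
    "For those individuals that are included with their regular education "
    "peers, they struggle more with accessing classroom reading materials "
    "because they are reading below grade level.",
]

for sentence in loose_sentences:
    # The prompt wording is an illustrative assumption, not the exact prompt used.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Revise this sentence so the content of the introductory "
                       "modifier moves into subject position: " + sentence,
        }],
    )
    print(response.choices[0].message.content)
```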

However, each of these revisions introduces a new problem.

The first revision places a revised modifier, “in her book Thinking in Pictures,” at the end of the sentence, the place of greatest emphasis. In this position, the modifier can be interpreted as modifying the embedded clause “autistic individuals rely on visual thinking.” On this interpretation, it’s specifically in the book Thinking in Pictures that autistic individuals rely on visual thinking. Whatever that means.

The second revision makes it sound like the activity of talking with the student’s mother is what does the suggesting. And the third revision removes the subject from “because they are reading below grade level” and changes it to the less transparent “due to reading below grade level.”

Interestingly, these are not the sorts of errors I’ve seen ChatGPT make in the texts it generates “from scratch” (i.e., not as an edit of an existing text). The presence of these errors in ChatGPT’s revisions is consistent with what LLMs are optimized for: generating new text, not analyzing existing text.
