Here’s a long-term trend I don’t recall hearing much about: over time, talk has been losing its non-talk context. That is, listeners have come to know less and less about the context surrounding a speaker’s talk.
Though animals have quite limited languages, they often manage to say what they need to say, and shared context helps with that. For example, each animal may see a lot about where the others are and what they are doing.
Human language allows for a lot more to be said via our words, and even more via style, such as tone, pacing, etc. But in private talk, we humans still rely heavily on context to make ourselves understood. This context includes not just where we are and what we are doing, but also our histories, relations to each other, and what we have heard about each other via gossip.
Humans also often distinguish between “text” and “subtext”. The literal meanings of our words often differ from, and even contradict, what we say via style and context. As quotable texts are usually crafted to keep subtext deniable, seeing style and context helps a lot in inferring subtext.
How much context humans have for interpreting words has always varied. For example, when humans addressed larger groups, less could be inferred from their relations to particular audience members. And low-context situations have increased over time. For example, with rising population densities, individuals more often talk to relative strangers.
While the introduction of writing allowed the exchange of letters between friends, it also allowed a single speaker to address many diverse people across space and time. Furthermore, schools and mass media have greatly encouraged many to spend a lot of time reading such low-context writings. In the last few decades, social media has gone further, encouraging many ordinary people to spend a lot of time writing in a lower-context mode as well. And very recently, large language models trained on big datasets of such public talk have learned to mimic this low-context talk style impressively well.
We humans change our talk in many ways to deal with lower context. For example, speakers add more expressive language, and distinctive talk styles, in order to create stronger packages of listener expectations. And because listeners consistently seek subtext, they more aggressively infer “implicatures” from what speakers say. For example, we feel more free to attribute motives according to speaker demographics or “political” associations.
Low-context writers seeking to avoid accusations of subtext often adopt a defensive, low-emotion “bureaucratic”, “official”, or “classic” style. This style admits of no motives other than telling you simply and directly what the writer sees. As this is the style of much of the text on which large language models have been trained, and as the sponsors of such models seek to avoid criticism of their models, these models also tend to admit to no other motives. Also, as classic official talk tends to be “socially desirable”, avoiding cynical appearances, these language models also tend to be reluctant to suspect low motives for human behaviors.