25 Comments

This post reminds me of something I read somewhere about psychopaths: they don't understand the emotional content of normal human language, but they learn to use language instrumentally, to con people into doing what the psychopath wants. I think this was in Robert Hare's book "Without Conscience". In a sense the psychopath has a shallower understanding of language, and in another sense a deeper one, than a normal person. Similarly, I wonder if machines could become surprisingly persuasive.

Yes and no. This is an area I've worked on - adding fluctuations to computer-performed music (specifically, to note length and volume) to make it sound human-like. Pretty crude randomness of the right kinds is sufficient to fool the ear.
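
Roughly, a sketch of the idea (the function name, note format, and jitter magnitudes here are illustrative guesses, not my actual system):

```python
import random

def humanize(notes, length_jitter=0.03, volume_jitter=6):
    """Add human-like fluctuations to (duration_seconds, midi_velocity) pairs."""
    humanized = []
    for duration, velocity in notes:
        # Small Gaussian fluctuation in note length (a few percent)...
        d = max(0.01, duration * (1.0 + random.gauss(0.0, length_jitter)))
        # ...and in volume, clamped to the MIDI velocity range 1-127.
        v = min(127, max(1, round(velocity + random.gauss(0.0, volume_jitter))))
        humanized.append((d, v))
    return humanized

# A perfectly even phrase comes out slightly, believably uneven.
print(humanize([(0.25, 80)] * 8))
```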

Steven Pinker debunked the idea that sentences are just strings of words chained together by probability in The Language Instinct.
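
To make the debunked model concrete: a toy word-chain ("bigram") generator of the sort Pinker criticizes looks like this (corpus and names made up for illustration), and it produces locally plausible but structureless strings.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words follow each word; duplicates encode frequency."""
    table = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def babble(table, start, length=12):
    """Generate text by repeatedly sampling a likely next word."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the dog saw the cat and the cat saw the dog run away"
print(babble(train_bigrams(corpus), "the"))
```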

+1 for relevancy!

Good book on postmodernism?

I just visited this website today. Very intelligible and interesting topic.

I suspect emotional sympathy is not all that hard to emulate. And this story from today about a chatbot being my friend tends to (anecdotally) support that idea. https://www.technologyrevie...

Providing emotional support is a very strong human signal that you are a loyal friend, someone you can trust in the worst of times. A basic human need.

Hard to be sure, but I'd see an emotional pal as a rather likely outcome. If it happened with ELIZA, it's certainly likely to happen even if you don't quite intend it to. Just human nature: a being that will always listen and pay attention is enough, even if you don't deliberately design for it. And if you do design for it... your chatbot provides unconditional emotional support. Plus... on the side, it tells you which products to buy. Alexa as therapist combined with Alexa as Amazon salesbot seems like a very powerful economic combination. Not for everyone, of course, but maybe very attractive to some segments of the population.

I would think we'd have robots for useful things, and chatbots would only be used where useful, such as at an information desk. Maybe some lonely people want someone to talk to, and it may make sense to build some basic information into them. It may even make sense to build one as a way of delivering advertising, or as a sex bot, but it is difficult to see a lot of use for that; novelty will produce some, but that would probably quickly wear off. Not being that useful, it won't be pursued very far.

Yes, that does sound like a better word choice.

Interesting. So, when will computer programs be able to generate cheap paperback romance novels that rival the "quality" (to speak loosely) of those books currently churned out by today's faceless chorus of 100%-formulaic, write-them-to-a-deadline junk-fiction authors? Or... has this goal already been accomplished? Note: I recall that some engineers I knew, quite long ago, were interested in building an automatic romance-novel-writing program on the computers available in the 1970s. They didn't succeed. But... has that long-foreseen/expected (even hoped-for?) future nearly arrived?

"Rambling" might be a better word than "babbling". In common usage, it refers to talk that makes sense superficially but doesn't communicate much, which is pretty much what you're talking about. "Babbling" has a strong well-defined meaning; too strong for what you're talking about.

Yes. The response of painting to the invention of photography is an excellent example. But if language should go the same way as gallery art, we're in trouble. Art today is a joyless, status-signaling potlatch.

I guess what Robin is indirectly saying is that AI will soon reach the intellect of the 99%. The 99% will then try to differentiate themselves by talking differently, similar to how the Texas accent is reportedly (per an Ezra Klein-Malcolm Gladwell podcast) growing in use among native Texans in response to the influx of out-of-staters.

I posit that the most likely outcome is that the use of babble talk for status games or pseudo-work will decline, freeing human brains for greater introspection, observation, etc. Definitely a positive development. What will come out is not "more of what we know" but some new and interesting applications of the human mind.

"schools will have to try harder to keep such students from using artificial babblers"

Or, schools could stop accepting low-quality essays for credit.

The proportion of "graduates" who know little about the field they supposedly studied is very large.

(Separately, attempts to prevent students from using tools they'll have access to in their profession are both Sisyphean and pointless.)

Imitating current snark on Twitter doesn't seem that hard; hence my example of Eugene Goostman's (mild) success. But to your larger and original point: if people deliberately speak/type in ways that differ from computer babbling, then on second thought I think you're likely correct. We should see styles shift to become harder for computer babblers to imitate: more insidery/subcultural, less like a banal Eugene Goostman.

Or maybe humans will talk less in text, or at least stop signalling via babbling in text, as computers become better at reproducing text-based signals. If Twitter were filled with superficially insightful and witty bots, I'd expect superficially insightful and witty humans to stop using it to signal how wittily insightful they are. Instead they'd use a platform that banned bots, or switch to audio or video, or back to strictly local signalling in real life.
