Better Babblers

You can think of knowing how to write as knowing how to correlate words. Given no words, what first word should you write? Then given one word, what second word best correlates with it? Then given two words, what third word best fits with those two? And so on. Thus your knowledge of how to write can be broken into what you know at these different correlation orders: one word, two words, three words, and so on. Each time you pick a new word, you can combine knowledge at these different orders by weighing all their different recommendations for your next word.
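
To make this concrete, here is a minimal sketch of such a correlation-order word picker in Python. Everything in it is my illustration rather than anything from the post: the function names, the fixed interpolation weights, and the placeholder corpus.txt file are all assumptions.

```python
# Minimal sketch of a "correlation order" text generator. All names and the
# interpolation weights are illustrative assumptions, not from the post.
import random
from collections import Counter, defaultdict

def train(words, max_order=3):
    """For each order k, count how often each word follows each k-word context."""
    counts = [defaultdict(Counter) for _ in range(max_order)]
    for i, word in enumerate(words):
        for order in range(max_order):
            if i >= order:
                counts[order][tuple(words[i - order:i])][word] += 1
    return counts

def next_word(counts, history, weights):
    """Blend each order's recommendation for the next word, weighted."""
    scores = Counter()
    for order, weight in enumerate(weights):
        context = tuple(history[-order:]) if order > 0 else ()
        followers = counts[order].get(context)
        if not followers:
            continue  # this context was never seen at this order
        total = sum(followers.values())
        for word, c in followers.items():
            scores[word] += weight * c / total
    choices, probs = zip(*scores.items())
    return random.choices(choices, weights=probs)[0]

def babble(counts, length=20, weights=(0.2, 0.3, 0.5)):
    out = []
    for _ in range(length):
        out.append(next_word(counts, out, weights))
    return " ".join(out)

corpus = open("corpus.txt").read().split()  # any large text file (placeholder)
print(babble(train(corpus)))
```

The weights decide how much each order's recommendation counts; in a real language model they would be tuned to the data rather than fixed by hand.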

This correlation order approach can also be applied at different scales. For example, given some classification of your first sentence, what kind of second sentence should follow? Given a classification of your first chapter, what kind of second chapter should follow? Many other kinds of knowledge can be similarly broken down into correlation orders, at different scales. We can do this for music, paintings, interior decoration, computer programs, math theorems, and so on.

Given a huge database, such as one of writings, it is easy to get good at very low orders: you can just use the correlation frequencies found in your dataset. After that, simple statistical models applied to this database can give you good estimates of correlations at somewhat higher orders. And if you have enough data (roughly ten million examples per category, I'm told), then recently popular machine learning techniques can improve your estimates at the next set of higher orders.
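
One way to see why higher orders demand so much more data: the number of distinct contexts explodes with order, so each context is observed less often and its raw frequency estimate gets noisier. A rough illustrative sketch (again with a placeholder corpus.txt, not anything from the post):

```python
# Rough sketch: as the order grows, more and more contexts are seen only
# once, so raw frequency estimates at that order become unreliable.
from collections import Counter

def context_stats(words, max_order=5):
    for order in range(1, max_order + 1):
        grams = Counter(tuple(words[i:i + order])
                        for i in range(len(words) - order + 1))
        singletons = sum(1 for c in grams.values() if c == 1)
        print(f"order {order}: {len(grams):>8} distinct n-grams, "
              f"{singletons / len(grams):5.0%} seen only once")

context_stats(open("corpus.txt").read().split())  # placeholder corpus file
```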

There are some cases where this is enough: either you can get enormous amounts of data, or learning low order correlations well is enough to solve your problem. These cases include many games with well-defined rules, many physical tasks where exact simulations are feasible, and some kinds of language translation. But there are still many other cases where this is far from enough to achieve human-level proficiency. In these cases, an important part of what we know can be described as very high order correlations produced by “deep” knowledge structures that aren’t well reduced to low order correlations.

After eighteen years of being a professor, I’ve graded many student essays. And while I usually try to teach a deep structure of concepts, what the median student actually learns seems to be mostly a set of low order correlations. They know what words to use, which words tend to go together, which combinations tend to have positive associations, and so on. But if you ask an exam question where the deep structure answer differs from the answer you’d guess from low order correlations, most students give the wrong answer.

Simple correlations also seem sufficient to capture most polite conversation, such as “the weather is nice,” “how is your mother’s illness?”, and “damn that other political party.” Simple correlations are also most of what I see in inspirational TED talks, and when public intellectuals and talk show guests pontificate on topics they really don’t understand, such as quantum mechanics, consciousness, postmodernism, or the perpetual need for more regulation everywhere. After all, media entertainers don’t need to understand deep structures any better than their audiences do.

Let me call styles of talking (or music, etc.) that rely mostly on low order correlations “babbling”. Babbling isn’t meaningless, but to ignorant audiences it often appears to be based on a deeper understanding than is actually the case. When done well, babbling can be entertaining, comforting, titillating, or exciting. It just isn’t usually a good place to learn deep insight.

As we slowly get better at statistics and machine learning, our machines will slowly get better at babbling. The famous Eliza chatbot went surprisingly far using very low order correlations, and today chatbots best fool us into thinking they are human when they stick to babbling-style conversations. So what does a world of better babblers look like?

First, machines will better mimic low quality student essays, so schools will have to try harder to keep such students from using artificial babblers.

Second, the better machines get at babbling, the more humans will try to distinguish themselves from machines via non-babbling conversational styles. So expect less use of simple easy-to-understand-and-predict speech in casual polite conversation, inspirational speeches, and public intellectual talk.

One option is to put a higher premium on talk that actually makes deep sense, in terms of deep concepts that experts understand. That would be nice for those of us who have always emphasized such things. But alas there are other options.

A second option is to put a higher premium on developing very distinctive styles of talking. This would be like how typical popular songs from two centuries ago could be sung and enjoyed by most anyone, whereas popular music today is matched in great detail to the particular features of particular artists. Imagine nearly all future speakers having a comparably distinct personal talking style.

A third option is more indirect, ironic, insider-style talk, such as we tend to see on Twitter today: people using words, phrases, and cultural references in ways that only folks very near in cultural space can recognize as fitting recent local fashion. Artificial babblers might not have enough data to track changing fashions in such narrow groups.

Bottom line: the more kinds of conversation styles that simple machines can manage, the more humans will try to avoid talking in those styles, at least when not talking to machines.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    The better machines get at babbling, the more humans will try to distinguish themselves from machines via non-babbling conversational styles.

    Is this premise necessarily true? Has the ability of computers to play chess made humans less interested in it? Sometimes, it is a compliment to be told that we functioned (reliably) like a machine.

    • Daniel Filan

      Humans may be interested in playing chess, but I bet that in casual play it’s considered impolite to pretend to think of moves by yourself but actually use a computer, and it’s against the rules in serious/tournament play.

    • http://praxtime.com/ Nathan Taylor (praxtime)

      I think a better analogy than chess (which can only be played according to a strict set of rules) is art. And here we see that the camera did in fact do exactly this. Artists responded to photography with impressionism. Which is great by the way! But ultimately I think the effort to go beyond cameras (machines) finally went overboard into silliness.

      • Faze

        Yes. The response of painting to the invention of photography is an excellent example. But if language should go the same way as gallery art, we’re in trouble. Art today is a joyless, status-signaling potlatch.

  • http://praxtime.com/ Nathan Taylor (praxtime)

    This is an excellent post. The reaction to sounding like a babbler will be interesting, and possibly quite a big deal. The biggest reaction I can think of will be to further enhance the split we already have in America between meritocracy winners and everyone else. The ego trip in calling the “back row kids” chatbots will be impossible to resist for the “front row kids”. This is a Coming Apart argument of course. It is the perfect fuel to fire up the existing splits in America between elite cities and everyone else.

    Separate note. Re this point: “A third option is more indirect, ironic, and insider style talk, such as we tend to see on Twitter today.” It strikes me it might be easier to imitate a snarky ideologue on Twitter than a normal human being. Recall the chatbot Eugene Goostman, which passed the Turing test by being a 13-year-old snark master. I guess we’ll find out if this helps or hurts Twitter trolling. Not sure which way it will go.
    http://www.zdnet.com/article/computer-chatbot-eugene-goostman-passes-the-turing-test/

    • http://overcomingbias.com RobinHanson

      Imitating generic snark might be easy, but imitating the particular snark used lately by your specific subculture could be hard.

      • http://praxtime.com/ Nathan Taylor (praxtime)

        Imitating current snark on Twitter doesn’t seem that hard; hence my example of Eugene Goostman’s (mild) success. But to your larger and original point, if people deliberately speak/type in such a way as to differ from computer babbling, on second thought I think you’re likely correct. We should see styles shift to be harder to imitate by computer babbling, becoming more insidery/subcultural. Less like a banal Eugene Goostman.

    • http://don.geddis.org/ Don Geddis

      Just to be clear, Turing’s original proposal ( https://en.wikipedia.org/wiki/Computing_Machinery_and_Intelligence ) was far more difficult for computers than the simple Turing-inspired contest that you cite. The Goostman chatbot did not pass “the Turing test”. Instead — not much different from Eliza many decades ago — it fooled a few humans into thinking that it might be human too. That much simpler problem was NOT Turing’s proposal.

      • http://praxtime.com/ Nathan Taylor (praxtime)

        Agree. Should have said “did well on a Turing-style test.”

  • http://invariant.org/ Peter Gerdes

    Another possibility (or subpossibility of distinct speaking) is that people will deliberately introduce errors into their speech that aren’t made by the machines. For instance, this is what occurred in DJing after computers improved to the point of being able to perfectly match beats.

    • http://overcomingbias.com RobinHanson

      Presumably it is the subtle pattern of errors that is key. It is very easy for computers to introduce errors.

  • Jason Young

    Or maybe humans talk less in text, or at least stop signalling via babbling in text, as computers become better at reproducing text-based signals. If Twitter were filled with superficially insightful and witty bots, I’d expect superficially insightful and witty humans to stop using it to signal how wittily insightful they are. Instead they’d use a platform that banned bots, or switch to audio or video, or back to strictly local signalling in real life.

  • Dave Lindbergh

    schools will have to try harder to keep such students from using artificial babblers

    Or, schools could stop accepting low-quality essays for credit.

    The proportion of “graduates” who know little about the field they supposedly studied is very large.

    (Separately, attempts to prevent students from using tools they’ll have access to in their profession are both Sisyphean and pointless.)

  • One of the dudes

    I guess what Robin is indirectly saying is that AI will soon reach the intellect of the 99%. The 99% will then try to differentiate themselves by talking differently, similar to how the Texas accent is reportedly (per an Ezra Klein–Malcolm Gladwell podcast) growing in use among native Texans in response to the influx of out-of-staters.

    I posit that the most likely outcome is that the use of babble talk for status games or pseudo-work will decline, freeing human brains for greater introspection, observation, etc. Definitely a positive development. What will come out is not “more of what we know,” but some new and interesting applications of the human mind.

  • Curt Adams

    “Rambling” might be a better word than “babbling”. In common usage, it refers to talk that makes sense superficially but doesn’t communicate much, which is pretty much what you’re talking about. “Babbling” has a strong well-defined meaning; too strong for what you’re talking about.

    • http://overcomingbias.com RobinHanson

      Yes, that does sound like a better word choice.

  • Robert Koslover

    Interesting. So, when will computer programs be able to generate cheap paperback romance novels that rival the “quality” (to speak loosely) of those books currently churned out by today’s faceless chorus of 100%-formulaic write-them-to-a-deadline junk fiction authors? Or… has this goal already been accomplished? Note: I recall that some engineers that I knew, quite long ago, were interested in working on a romance-novel automatic-writing program using computers available in the 1970s. They didn’t succeed. But… has that long foreseen/expected (even hoped for?) future nearly arrived?

  • Lord

    I would think we have robots for useful things, and chatbots would only be used when useful, such as at an information desk. Maybe some lonely people want someone to talk to, and it may make some sense to build basic information into them. It may even make sense to build one as a way of delivering advertising or as a sex bot, but it is difficult to see a lot of use for that, and while novelty will produce some, that would probably quickly wear off. Not being that useful, it won’t be pursued very far.

    • http://praxtime.com/ Nathan Taylor (praxtime)

      I suspect emotional sympathy is not all that hard to emulate. And this story from today about a chatbot being my friend tends to (anecdotally) support that idea.
      https://www.technologyreview.com/s/603936/three-weeks-with-a-chatbot-and-ive-made-a-new-friend/

      Providing emotional support is a very strong human signal that you are a loyal friend. Someone you can trust in the worst of times. A basic human need.

      Hard to be sure, but I’d see an emotional pal as a rather likely outcome. If it happened with ELIZA, certainly it’s likely to happen even if you don’t quite intend it to. Just human nature: a being that will always listen and pay attention is enough even if you don’t deliberately design for it. And if you do design for it… your chatbot provides unconditional emotional support. Plus… on the side, it tells you which products to buy. Alexa as therapist combined with Alexa as Amazon salesbot seems like a very powerful economic combination. Not for everyone of course, but maybe very attractive to some segments of the population.

  • Daniel Ngenegbo

    I just visited this website today. Very intelligible and interesting topic.

    • anon

      +1 for relevancy!

  • Ari T

    Good book on postmodernism?

  • Pingback: Words and deeds | Compass Rose

  • m_knny

    This post reminds me of something I read somewhere about psychopaths, that they don’t understand the emotional content of normal human language but learn how to use language instrumentally to con people into doing what the psychopath wants–I think this was in Robert Hare’s book “Without Conscience”. In a sense the psychopath has a shallower understanding of language and in another sense a deeper understanding compared to a normal person. Similarly I wonder if machines could become surprisingly persuasive.