Author Archives: Robin Hanson

Superhumans Live Among Us

Computers are impressive machines, and they get more impressive every year, as hardware gets cheaper and software gets better. But while they are substantially better than humans on many important tasks, still overall humans earn far more income from using their smarts than do computers. And at past rates of progress it looks like it will take centuries before computers earn more income overall.

The usual explanation for why humans are so much more capable is their flexibility, which probably results mainly from their breadth. A computer doing a task usually has available to it a far smaller range of methods, knowledge, and data. When what it has are good enough, a computer can be far more accurate and cheaper than a human. But when a computer lacks important relevant methods, knowledge, and data, then you just can't do without that human flexibility and breadth. You might hire a human to work with a computer, but you still need that human on the team.

In our world today, most people are specialists; they spend years learning the methods, knowledge, and data relevant to an existing recognized specialty area. And when your problem falls well within such an existing area, that is exactly the sort of person you want to work on it.

But often we face problems that don't fall well within existing specialty areas. If we can give a short list of specialty areas that cover our problem, then we can collect a team with members from all those areas. Because talking between people is much less efficient than communication within one person, this team will take a lot longer to solve our problem. But still, eventually such teams are usually up to the task.

However, sometimes we face problems where we don't know which kinds of expertise are relevant. In such cases what we really need is a person who is expert in far more areas than most people are. Let me call such people "polymaths", though that word is often used for people who have wide interests but not wide expertise. A polymath with expertise in enough areas has a far better chance of solving broad hard-to-classify problems. A polymath is to an ordinary human as that human is to a computer, at least in terms of relative flexibility and breadth, and thus generality.

Quite often a specialist will see that some of their tools apply to a problem, and not realize that there are tools from other areas that also apply. And if specialists from other areas tell them that other tools do apply, they will usually not have sufficient expertise to directly evaluate that claim. And so the usual human arrogance will often lead them to disagree. Specialists from each area will say that they can help, and discount the possibility of help from other kinds of specialists.

Now a clear long track record showing that teams that include several kinds of specialists tend to solve a certain kind of problem better may convince many specialists that other specialists are relevant. But we often lack such clear long track records. In such cases, we often get stuck in a pattern of having a particular kind of expert deal with a particular kind of problem, even when other kinds of experts could help.

The same thing applies when humans know more than computers. Usually there’s nothing the human could say to prove to the computer that it is missing important relevant tools and knowledge. The computer just doesn’t understand these other tools well enough. So the computer has to just be told to defer to the human when the human thinks it knows better.

Bottom line: superhumans really do live among us, and their better abilities compared to us really are analogous to the way we are so much better than computers: they have more flexibility, due to more breadth of expertise. But without clear track records, they usually don't have ways to convince us to listen to them. Once we've found one kind of expert relevant to a problem, those experts tend to tell us that other kinds aren't needed, and we tend to believe them.

Superhumans walk among us, but don’t get the respect they deserve. We reserve our highest honors for those who are best at specific recognized specialty areas, and mainly only recognize polymaths when they are good enough at one such area.

Added 22Apr: Actually, someone with multiple expertise areas isn't what I meant if they haven't worked to integrate them. Compared to computers, the human mind can not only do many things, it has integrated those tools together well. When areas overlap, one needs a common representation to accommodate them both. Is one a special case of the other? Do they focus on different parameters in a common parameter space? I mean to refer to a polymath who has successfully integrated their many areas of expertise.

Mormon Transhumanists

A standard trope of science fiction has religious groups using violence to stop a new technology. Perhaps because of this, many are surprised by the existence of religious transhumanists. Saturday I gave a keynote talk on Age of Em at the Mormon Transhumanist Association (MTA) annual conference, and had a chance to study such folks in more detail. And I should say right off the top that this MTA audience, compared to other audiences, had notably fewer morality or religious related objections to my em scenario.

I’m not surprised by the existence of religious tech futurists. Overall, the major world religions have been quite successful in adapting to the many social changes since most of them first appeared many millennia ago. Also, the main predictor of interest in tech futurism and science fiction is an interest in science and technology, and religious folks are not underrepresented there. Even so, you might ask what your favorite theories of religion predict about how MTA folk would differ from other transhumanists.

The most obvious difference I saw is that MTA does community very well, with good organization, little shirking, and lots of polite, respectful, and friendly interaction. This makes sense. Mormons in general have strong community norms, and one of the main functions of religion is to build strong communities. Mormonism is a relatively high commitment religion, and those tend to promote stronger bonds.

Though I did not anticipate it, a predictable consequence of this is that MTA is more of a transhuman take on Mormonism than a Mormon take on transhumanism. On reflection, this reveals an interesting way that long-lived groups with dogmas retain and co-opt smart intellectuals. Let me explain.

One standard sales technique is to try to get your mark to spend lots of time considering your product. This is a reason why salespeople often seem so slow and chatty. The more time you spend considering their product, the longer that you will estimate it will take to consider other products, and the more likely you are to quit searching and take their product.

Similarly, religions often expose children to a mass of details, as in religious stories. Smart children can be especially engaged by these details because they like to show off their ability to remember and understand detail. Later on, such people can show off their ability to interpret these details in many ways, and to identify awkward and conflicting elements.

Even if the conflicts they find are so severe as to reasonably call into question the entire thing, by that time such people have invested so much in learning details of their religion that they’d lose a lot of ability to show off if they just left and never talked about it again. Some become vocally against their old religion, which lets them keep talking and showing off about it. But even in opposition, they are still then mostly defined by that religion.

I didn't meet any MTA folks who took Mormon claims of miraculous historical events literally. They seemed well informed on science and tech, and willing to apply typical engineering and science standards to such things. Even so, MTA folks are so focused on their own Mormon world that they tend to be less interested in asking how Mormons could anticipate and prepare for future changes, and more interested in how future sci/tech themes could reframe and interpret key Mormon theological debates and claims. In practice their strong desire to remain Mormons in good standing means that they mostly accept practical church authority, including the many ways that the church hides the awkward and conflicting elements of its religious stories and dogma.

For example, MTA folks exploring a “new god argument” seek scenarios wherein we might live in a simulation that resonate with Mormon claims of a universe full of life and gods. While these folks aren’t indifferent to the relative plausibility of hypotheses, this sort of exercise is quite different from just asking what sort of simulations would be most likely if we in fact did live in a simulation.

I’ve said that we today live in an unprecedented dreamtime of unadaptive behavior, a dream from which some will eventually awake. Religious folks in general tend to be better positioned to awake sooner, as they have stronger communities, more self-control, and higher fertility. But even if the trope applies far more in fiction than in reality, it remains possible that Mormon religious orthodoxy could interfere with Mormons adapting to the future.

MTA could help to deal with such problems by becoming trusted guides to the future for other Mormons. To fill that role, they would of course need to show enough interest in Mormon theology to convince the others that they are good Mormons. But they would also need to pay more attention to just studying the future regardless of its relevance to Mormon theology. Look at what is possible, what is likely, and the consequences of various actions. For their sakes, I hope that they can make this adjustment.

By the way, we can talk similarly about libertarians who focus on criticizing government regulation and redistribution. The more one studies the details of government actions, showing off via knowing more such detail, then even if one mostly criticizes such actions, still one’s thinking becomes mostly defined by government. To avoid this outcome, focus more on thinking about what non-government organizations should do and how. It isn’t enough to say “without government, the market will do it.” Become part of a market that does things.

Hail Humans

Humans developed a uniquely strong and flexible capacity for social norms (see Boehm). Because of this, the praise that humans most crave is an acknowledgment that we are principled. That is, that we (mostly) adhere to the norms of our society, even when doing so is costly. And that includes the norm of calling attention to and punishing norm deviators.

In this post, I want to praise most humans for living up to this standard. This isn’t remotely a trivial accomplishment, and it just doesn’t get enough mention. Again, other animals can’t manage it. And most of us are often sorely tempted to defect.

It is much easier to embrace our society’s norms when we feel that we are winning by those norms, or at least breaking even. In this case we can each justify our norm-supporting sacrifices as the price we each pay to get others to make their sacrifices, to create a functioning society.

But much of our innate programming is tuned to watch for markers of relative status, ways in which some of us seem better than others. And by this standard most of us are losers, gaining less than average relative status. (In technical terms, the median of success is well below the mean.)
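That parenthetical claim is easy to check numerically. Here is a hedged sketch, assuming a hypothetical right-skewed (lognormal) distribution of success; the distribution and parameters are purely illustrative:

```python
import random

# In a right-skewed distribution of "success" (illustrated here by a
# hypothetical lognormal income draw), the median falls well below the
# mean, so most individuals rank below average.
random.seed(0)
incomes = sorted(random.lognormvariate(0, 1.0) for _ in range(100001))
mean = sum(incomes) / len(incomes)
median = incomes[len(incomes) // 2]
below_mean = sum(1 for x in incomes if x < mean) / len(incomes)
print(median < mean)     # True: the median sits below the mean
print(below_mean > 0.5)  # True: most draws are "below average"
```

For lognormal(0, 1) the mean is about 1.65 while the median is about 1.0, so roughly seven in ten draws fall below the mean.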

When we feel like we are losers, so that others are gaining much more from society’s norms than we are, it is easier to doubt if we should continue to personally sacrifice to support those norms. Especially when we suspect that winners tend to win in part because they support some norms less than others do.

I think that in most societies, most losers do in fact suspect most winners of insufficient norm support. And there are some who use that as a justification to excuse their norm deviations. And most losers believe that there are many such deviants, and that such deviants tend to gain as a result of their failures to support norms.

And yet, even when they believe that most winners and many others gain from failing to sufficiently support norms, most losers still pay large personal costs to support most norms most of the time. Yes most everyone deviates sometimes, and yes we often work much harder to create the appearance than the substance of norm support. That is, we often attend more to what looks helpful than what is helpful.

Even so, hail to most humans for supporting their society’s norms enough to make possible society, and civilization. Yes, you might think that some societies have a better set of norms than others. And yes we might lament the lack of enough attention to preserving or inventing good norms.

But still, given that it is the praise that humans most crave to hear, and that they in fact do meet the relevant standard, we should give credit where credit is due. Hail to humans for supporting norms. At least their appearance, for most norms, most of the time.

Fuller on Age of Em

I'd heard that an academic review of Age of Em was forthcoming in the new Journal of Posthuman Studies. And after hearing about Baum's review, Steve Fuller, the author of this second academic review (which won't be published for a few months), gave me permission to quote from it here, starting with some praise.

Baum on Age of Em

In the journal Futures, Seth Baum gives the first academic review of Age of Em, starting with some words of praise.

A Book Response Prediction

All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident. Schopenhauer, 1788-1860.

My next book won't come out until January, and reviews of it will appear in the weeks and months after that. But now, a year in advance, I want to make a prediction about the main objections that will be voiced. In particular, I predict that two of the most common responses will be a particular opposing pair.

If you recall, our book is about hidden motives (a.k.a., "X is not about Y"):

We’re afraid to acknowledge the extent of our own selfishness. .. The Elephant in the Brain aims to .. blast floodlights into the dark corners of our minds. .. Why do humans laugh? Why are artists sexy? Why do people brag about travel? Why do we so often prefer to speak rather than listen?

Like all psychology books, The Elephant in the Brain examines many quirks of human cognition. But this book also ventures where others fear to tread: into social critique. The authors show how hidden selfish motives lie at the very heart of venerated institutions like Art, Education, Charity, Medicine, Politics, and Religion.

I predict that one of the most common responses will be something like “extraordinary claims require extraordinary evidence.” While the evidence we offer is suggestive, for claims as counterintuitive as ours on topics as important as these, evidence should be held to a higher standard than the one our book meets. We should shut up until we can prove our claims.

I predict that another of the most common responses will be something like “this is all well known.” Wise observers have known and mentioned such things for centuries. Perhaps foolish technocrats who only read in their narrow literatures are ignorant of such things, but our book doesn’t add much to what true scholars and thinkers have long known.

These responses are opposing in the sense that it is hard to find a set of positions from which one could endorse both responses.

I have not phrased this prediction so as to make it very easy to check later if it's right. I have also not offered a specific probability. Given the many ambiguities here, this seems right to me.

Reversible Simulations 

Physicist Sabine Hossenfelder is irate that non-physicists use the hypothesis that we live in a computer simulation to intrude on the territory of physicists:

The simulation hypothesis, as it’s called, enjoys a certain popularity among people who like to think of themselves as intellectual, believing it speaks for their mental flexibility. Unfortunately it primarily speaks for their lacking knowledge of physics.

Among physicists, the simulation hypothesis is not popular and that’s for a good reason – we know that it is difficult to find consistent explanations for our observations. After all, finding consistent explanations is what we get paid to do.

Proclaiming that “the programmer did it” doesn’t only not explain anything – it teleports us back to the age of mythology. The simulation hypothesis annoys me because it intrudes on the terrain of physicists. It’s a bold claim about the laws of nature that however doesn’t pay any attention to what we know about the laws of nature. If you try to build the universe from classical bits, you won’t get quantum effects, so forget about this – it doesn’t work. ..

For the purpose of this present post, the details don’t actually matter all that much. What’s more important is that these difficulties of getting the physics right are rarely even mentioned when it comes to the simulation hypothesis. Instead there’s some fog about how the programmer could prevent simulated brains from ever noticing contradictions, for example contradictions between discretization and special relativity.

But how does the programmer notice a simulated mind is about to notice contradictions and how does he or she manage to quickly fix the problem? If the programmer could predict in advance what the brain will investigate next, it would be pointless to run the simulation to begin with. So how does he or she know what are the consistent data to feed the artificial brain with when it decides to probe a specific hypothesis? Where does the data come from? The programmer could presumably get consistent data from their own environment, but then the brain wouldn’t live in a simulation. (more)

Video games today typically only compute visual and auditory details of scenes that players are currently viewing, and then only to a resolution players are capable of noticing. The physics, chemistry, etc. is also made only as consistent and exact as typical players will notice. And most players don’t notice enough to bother them.
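That lazy-detail strategy is simple to sketch. Everything below (the region names, the stand-in content function) is hypothetical, just to show the shape of on-demand world generation:

```python
from functools import lru_cache

# Hypothetical sketch: a simulated world computes scene detail lazily,
# only for regions an observer is currently viewing, and only to the
# observer's resolution. Unviewed regions cost nothing until probed.

computed = set()  # track which (region, resolution) cells were ever built

@lru_cache(maxsize=None)
def detail(region, resolution):
    """Expensive world generation, done on demand and memoized."""
    computed.add((region, resolution))
    return hash((region, resolution)) % 1000  # stand-in for rendered content

def observe(region, resolution):
    # Only the observed region, at the observed resolution, is materialized.
    return detail(region, resolution)

observe("forest", 2)
observe("forest", 2)               # cached: no recomputation
print(("forest", 2) in computed)   # True: viewed, so computed
print(("ocean", 8) in computed)    # False: never viewed, never computed
```

The memoization means a region is paid for at most once per resolution, which is why repeat viewers are cheap and careful probing is what gets expensive.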

What if it were physicists playing a video game? What if they recorded a long game period from several points of view, and were then able to go back and spend years scouring their data carefully? Mightn't they then be able to prove deviations? Of course they might, if they tried long and hard enough. And all the more so if the game allowed players to construct many complex measuring devices.

But if the physicists were entirely within a simulation, then all the measuring, recording, and computing devices available to those physicists would be under full control of the simulators. If devices gave measurements showing deviations, the output of those devices could just be directly changed. Or recordings of previous measurements could be changed. Or simulators could change the high level output of computer calculations that study measurements. Or they might perhaps more directly change what the physicists see, remember, or think.

In addition, within a few decades computers in our world will typically use reversible computation (as I discuss in my book), wherein costs are low to reverse previous computations. When simulations are run on reversible computers, it becomes feasible and even cheap to wait until a simulation reveals some problem, then reverse the simulation back to an earlier point, make some changes, and run the simulation forward again to see if the problem is avoided. And repeat until the problem is in fact avoided.
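Here is a minimal sketch of that checkpoint-and-rollback loop. The "anomaly" rule and detail levels are made up for illustration, and an actual reversible computer would replace the deep copies with cheap reversal:

```python
import copy

# Sketch: run a simulation forward; when a simulated observer detects an
# anomaly, restore an earlier checkpoint, patch in more physics detail,
# and re-run until the anomaly is no longer detected.

def step(state):
    state["t"] += 1
    # An observer notices a contradiction when detail is too coarse.
    state["anomaly_seen"] = state["t"] >= 5 and state["detail"] < 2
    return state

def run_with_rollback(max_detail=5):
    state = {"t": 0, "detail": 0, "anomaly_seen": False}
    checkpoint = copy.deepcopy(state)
    while state["t"] < 10:
        state = step(state)
        if state["anomaly_seen"]:
            # Roll back and retry with finer physics detail.
            state = copy.deepcopy(checkpoint)
            state["detail"] += 1
            if state["detail"] > max_detail:
                raise RuntimeError("cannot hide anomaly; end simulation")
            checkpoint = copy.deepcopy(state)
        else:
            checkpoint = copy.deepcopy(state)  # safe point to return to
    return state

final = run_with_rollback()
print(final["anomaly_seen"])  # False: the anomaly was patched away
print(final["detail"])        # 2: detail raised just enough to hide it
```

Note the key economy: detail is only added where and when an observer was about to notice, which is exactly the strategy described above.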

So those running a simulation containing physicists who could detect deviations from some purported physics of the simulated world could actually wait until some simulated physicist claimed to have detected a deviation. Or even wait until an article based on their claim was accepted for peer review. And then back up the simulation and add more physics detail to try to avoid the problem.

Yes, to implement a strategy like this those running the simulation might have to understand the physics issues as well as did the physicists in the simulation. And they'd have to adjust the cost of computing their simulation to the types of tests that physicists inside ran. In the worst case, if the simulated universe seemed to allow for very large incompressible computations, then if the simulators couldn't find a way to fudge that by changing high level outputs, they might have to find an excuse to kill off the physicists, to directly change their thoughts, or to end the simulation.

But overall it seems to me that those running a simulation containing physicists have many good options short of ending the simulation. Sabine Hossenfelder goes on to say:

It’s not that I believe it’s impossible to simulate a conscious mind with human-built ‘artificial’ networks – I don’t see why this should not be possible. I think, however, it is much harder than many future-optimists would like us to believe. Whatever the artificial brains will be made of, they won’t be any easier to copy and reproduce than human brains. They’ll be one-of-a-kind. They’ll be individuals.

It therefore seems implausible to me that we will soon be outnumbered by artificial intelligences with cognitive skills exceeding ours. More likely, we will see a future in which rich nations can afford raising one or two artificial consciousnesses and then consult them on questions of importance.

Here I just don’t see what Sabine can be thinking. Today we can quickly make many copies of most any item that we can make in factories from concise designs. Yes, quantum states have a “no-cloning theorem”, but even so if we knew of a good quantum state to start a system in, we should be able to create many such systems that start in that same state. And I know of no serious claim that human minds make important use of unclonable quantum states, or that this would prevent creating many such systems fast.

Yes, biological systems today can be hard to copy fast, because they are so crammed with intricate detail. But as with other organs like bones, hearts, ears, eyes, and skin, most of the complexity in biological brain cells probably isn’t used directly for the function that those cells provide the rest of the body, in this case signal processing. So just as emulations of bones, hearts, ears, eyes, and skin can be much simpler than those organs, a brain emulation should be much simpler than a brain.

Maybe Sabine will explain her reasoning here.

Darwin’s Unfinished Symphony

In one kind of book, a smooth talker who has published many books takes a fraction of a year to explore a topic that has newly piqued their curiosity. In another kind of book, someone who has spent a lifetime wrestling with a big subject tries to put it all together into an integrated synthesis. Sometimes they even synthesize an entire research group or tradition. Kevin Laland's book Darwin's Unfinished Symphony is this second kind of book, a kind I much prefer.

Laland’s research group has for decades studied the origins of human cultural evolution. They’ve learned a lot. In particular they attribute humanity’s unique ability to accumulate culture over a long time to our very high reliability in transferring practices. Humans achieve such high reliability both by being smart, and by our unusual ability to teach, i.e., changing our behavior to make it easier for others to copy our practices. Just how high a reliability is required is shown by the example of Tasmania, where several thousand isolated humans slowly lost many skills and tools over thousands of years. It seems even human level intelligence and teaching isn’t good enough if your population is only a few thousand.

In both this book and in Henrich’s The Secret of our Success, I detect a tone of conflict between those who emphasize the value of smart brains for evolving culture, and those who emphasize the value of smart brains for managing the complex politics of large social groups. For example, in his book Laland says:

The currently dominant view is that the primate brain expanded to cope with the demands of a rich social life, including the aforementioned Machiavellian skills required to deceive and manipulate others, and the cognitive skills necessary to maintain alliances and track third-party relationships. The most important data supporting this hypothesis is a positive relationship between measures of group size and relative brain size. In our analyses, group size remained an important predictor of relative brain size, but also proved a significant secondary predictor of primate intelligence and social learning. However, group size was neither the sole, nor the most important, predictor of brain size or intelligence in our models. Combined with our earlier finding that social group size does not predict the performance of primates in laboratory tests of cognition, this reinforced our view that there was more to primate brain evolution than selection for social intelligence. (p.144)

As far as I can remember, all of the cultural learning examples in both the Laland and Henrich books are outside of the domain of Machiavellian social competition. But cultural learning can also be useful there, and so even if the strongest selection pressure on brains was for social competition, that is completely consistent with a strong selection for increasingly reliable abilities to learn and teach. Of course the overall long term increase in humanity’s power and scope is probably less directly due to better social competition skills. But from each creature’s point of view that is mostly a side effect relative to their struggle to survive and reproduce.

Imagine Philosopher Kings

I just read Joseph Heath’s Enlightenment 2.0 (reviewed here by Alex). Heath is a philosopher who is a big fan of “reason,” which he sees as an accidentally-created uniquely-human mental capacity offering great gains in generality and accuracy over our other mental capacities. However, reason comes at the costs of being slow and difficult, requiring fragile social and environmental supports, and going against our nature.

Heath sees a recent decline in reliance on reason within our political system, which he blames much more on the right than the left, and he has a few suggestions for improvement. He wants the political process to take longer to consider each choice, to focus more on writing relative to sound and images, and to focus more on longer essays instead of shorter quips. Instead of people just presenting views, he wants more cross-examination and debate. Media coverage should focus more on experts than on journalists. (Supporting quotes below.)

It seems to me that academic philosopher Heath’s ideal of reason is the style of conversation that academic philosophers now use among themselves, in journals, peer review, and in symposia. Heath basically wishes that political conversations could be more like the academic philosophy conversations of his world. And I expect many others share his wish; there is after all the ancient ideal of the “philosopher king.”

It would be interesting if someone would explore this idea in detail, by trying to imagine just what governance would look like if it were run similarly to how academic philosophers now run their seminars, conferences, journals, and departments. For example, imagine requiring a Ph.D. in philosophy to run for political office, and that the only political arguments that one could make in public were long written essays that had passed a slow process of peer review for cogency by professional philosophers. Bills sent to legislatures would also require such a peer-reviewed supporting essay. Imagine further incentives to write essays responding to others, rather than just presenting one's own view. For example, one might have to publish two response essays before being allowed to publish one non-response essay.

Assume that this new peer review process managed to uphold intellectual standards roughly as well as does the typical philosophy subfield journal today. Even then, I don’t have much confidence that this would go well. But I’m not sure, and I’d love to see someone who knows the internal processes of academic philosophy in some detail, and also knows common governance processes in some detail, work out a plausible guess for what a direct combination of these processes would look like. Perhaps in the form of a novel. I think we might learn quite a lot about what exactly can go right and wrong with reason.

Other professions might plausibly also wish that we ran the government more according to the standards that they use internally. It could also be interesting to imagine a government that was run more like how an engineering community is run, or how a community of physicists is run. Or even a community of spiritualists. Such scenarios could be both entertaining and informative.

Those promised quotes from Enlightenment 2.0 appear in the full post.

Better Babblers

You can think of knowing how to write as knowing how to correlate words. Given no words, what first word should you write? Then given one word, what second word best correlates with that? Then given two words, what third word best fits with those two? And so on. Thus your knowledge of how to write can be broken into what you know at these different correlation orders: one word, two words, three words, and so on. Each time you pick a new word, you can combine knowledge at these different orders, by weighing all their different recommendations for your next word.

This correlation order approach can also be applied at different scales. For example, given some classification of your first sentence, what kind of second sentence should follow? Given a classification of your first chapter, what kind of second chapter should follow? Many other kinds of knowledge can be similarly broken down into correlation orders, at different scales. We can do this for music, paintings, interior decoration, computer programs, math theorems, and so on.

Given a huge database, such as of writings, it is easy to get good at very low orders; you can just use the correlation frequencies found in your dataset. After that, simple statistical models applied to this database can give you good estimates for correlations to use at somewhat higher orders. And if you have enough data (roughly ten million examples per category I’m told) then recently popular machine learning techniques can improve your estimates at a next set of higher orders.
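As a toy illustration of the lowest orders, here is a "babbler" that knows only second-order (bigram) correlations learned from a tiny made-up corpus, yet already emits locally plausible word sequences:

```python
import random
from collections import defaultdict

# Toy babbler: learn only bigram (second-order) correlations from a tiny
# hypothetical corpus, then emit text by sampling a plausible next word.
corpus = "the weather is nice . the weather is cold . the food is nice .".split()

bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)  # frequencies preserved via repetition

def babble(start, n_words, seed=0):
    rng = random.Random(seed)
    words = [start]
    for _ in range(n_words - 1):
        choices = bigrams.get(words[-1])
        if not choices:
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(babble("the", 6))  # each adjacent pair was seen in the corpus
```

Every adjacent word pair the babbler emits occurred in its training data, so the output looks locally sensible while carrying no deep structure at all, which is the point.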

There are some cases where this is enough; either you can get enormous amounts of data, or learning low order correlations well is enough to solve your problem. These cases include many games with well defined rules, many physical tasks where exact simulations are feasible, and some kinds of language translation. But there are still many other cases where this is far from enough to achieve human level proficiency. In these cases an important part of what we know can be described as very high order correlations produced by “deep” knowledge structures that aren’t well reduced to low order correlations.

After eighteen years of being a professor, I've graded many student essays. And while I usually try to teach a deep structure of concepts, what the median student actually learns seems to mostly be a set of low order correlations. They know what words to use, which words tend to go together, which combinations tend to have positive associations, and so on. But if you ask an exam question where the deep structure answer differs from the answer you'd guess looking at low order correlations, most students usually give the wrong answer.

Simple correlations also seem sufficient to capture most polite conversation talk, such as the weather is nice, how is your mother’s illness, and damn that other political party. Simple correlations are also most of what I see in inspirational TED talks, and when public intellectuals and talk show guests pontificate on topics they really don’t understand, such as quantum mechanics, consciousness, postmodernism, or the need always for more regulation everywhere. After all, media entertainers don’t need to understand deep structures any better than do their audiences.

Let me call styles of talking (or music, etc.) that rely mostly on low order correlations “babbling”. Babbling isn’t meaningless, but to ignorant audiences it often appears to be based on a deeper understanding than is actually the case. When done well, babbling can be entertaining, comforting, titillating, or exciting. It just isn’t usually a good place to learn deep insight.

As we slowly get better at statistics and machine learning, our machines will slowly get better at babbling. The famous Eliza chatbot went surprisingly far using very low order correlations, and today chatbots best fool us into thinking they are human when they stick to babbling style conversations. So what does a world of better babblers look like?

First, machines will better mimic low quality student essays, so schools will have to try harder to keep such students from using artificial babblers.

Second, the better machines get at babbling, the more humans will try to distinguish themselves from machines via non-babbling conversational styles. So expect less use of simple easy-to-understand-and-predict speech in casual polite conversation, inspirational speeches, and public intellectual talk.

One option is to put a higher premium on talk that actually makes deep sense, in terms of deep concepts that experts understand. That would be nice for those of us who have always emphasized such things. But alas there are other options.

A second option is to put a higher premium on developing very distinctive styles of talking. This would be like how typical popular songs from two centuries ago could be sung and enjoyed by most anyone, compared to how popular music today is matched in great detail to the particular features of particular artists. Imagine most all future speakers having as distinct a personal talking style.

A third option is more indirect, ironic, and insider style talk, such as we tend to see on Twitter today. People using words and phrases and cultural references in ways that only folks very near in cultural space can clearly accept as within recent local fashion. Artificial babblers might not have enough data to track changing fashions in such narrow groups.

Bottom line: the more kinds of conversation styles that simple machines can manage, the more humans will try to avoid talking in those styles, at least when not talking to machines.
