Many people have been working hard for a long time to develop tech that helps to read people’s feelings. They are working on ways to read facial expressions, gazes, word choices, tones of voice, sweat, skin conductance, gait, nervous habits, and many other body features and motions. Over the coming years, we should expect this tech to consistently get cheaper and better at reading subtler feelings of more people, in more kinds of contexts, more reliably.
Much of this tech will read you involuntarily. While your permission and assistance may help such tech to read you better, others will often be able to read you using tech that they control, on their persons and in the buildings around you. They can use tech integrated with other complex systems, which makes it hard to monitor and regulate. Yes, some defenses are possible, such as wearing dark sunglasses or burqas, and electronically modulating your voice. But such options seem rather awkward, and I doubt most people will be willing to use them much in most familiar social situations. And I doubt that regulation will greatly reduce the use of this tech. The overall trend seems clear: our true feelings will become more visible to the people around us.
We are often hypocritical about our feelings. That is, we pretend to some degree to have certain acceptable public feelings, while actually harboring different feelings. Most people know that this happens often, but our book The Elephant in the Brain suggests that we still vastly underestimate typical levels of hypocrisy. We all mask our feelings a lot, quite often from ourselves. (See our book for many more details.)
These two facts, better tech for reading feelings and widespread hypocrisy, seem to me to be on a collision course. As a result, within a few decades, we may see something of a “hypocrisy apocalypse”, or “hypocralypse”, wherein familiar ways to manage hypocrisy become no longer feasible, and collide with common norms, rules, and laws. In this post I want to outline some of the problems we face.
Long ago, I was bullied as a child. And so I know rather well that one of the main defenses children develop to protect themselves against bullies is learning to mask their feelings. Bullies tend to see kids who are visibly scared or distraught as openly inviting them to bully. Similarly, many adults protect themselves from salespeople and sexual predators by learning to mask their feelings. Masked feelings also help us avoid conflict with rivals at work and in other social circles. For example, we learn not to visibly insult or disrespect big people in rowdy bars if we don’t want to get beaten up.
Tech that unmasks feelings threatens to weaken the protections that masked feelings provide. That big guy in a rowdy bar may use new tech to see that everyone else there can see that you despise him, and take offense. Your bosses might see your disrespect for them, or your skepticism regarding their new initiatives. Your church could see that you aren’t feeling very religious at church service. Your school and nation might see that your pledge of allegiance was not heartfelt. And so on.
While these seem like serious issues, change will be mostly gradual, so we may have time to flexibly search the space of possible adaptations. We can change how, and with whom, we meet for what purposes, and what topics we consider acceptable to discuss where. We can be more selective about to whom we make our feelings more visible, and how.
I worry more about collisions between better tech for reading feelings and common social norms, rules, and laws. Especially norms and laws that we adopt for more symbolic purposes, instead of to actually manage our interactions. These things tend to be less responsive to changing conditions.
For example, today we often consider it to be unacceptable “sexual harassment” to repeatedly and openly solicit work associates for sex, especially after they’ve clearly rejected the solicitor. We typically disapprove not just of direct requests, but also of less direct but relatively clear invitation reminders, such as visible leers, sexual jokes, and calling attention to your “junk”. And of course such rules make a great deal of sense.
But what happens when tech can make it clearer who is sexually attracted, and how much, to whom? If the behavior that led to these judgements were completely out of each person’s control, it might be hard to blame anyone for them. We might then socially pretend that such attraction doesn’t exist, though we might eagerly check it out privately. Unfortunately, our behavior will probably continue to modulate the processes that produce such judgements.
For example, the systems that judge how attracted you are to someone might focus on the moments when you directly look at that person, when your face is clearly visible to some camera, under good lighting, and when you aren’t wearing sunglasses or a burqa. So the longer you spend directly looking at someone under such conditions, the better the tech will be able to see your attraction. As a result, your choice to spend more time looking directly at them under favorable reading conditions might be seen as an intentional act, a choice to send the message that you are sexually attracted to them. And thus your continuing to do so after they have clearly rejected you might be seen as sexual harassment.
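To make that conditioning concrete, here is a minimal, purely hypothetical sketch in Python. Everything in it (the Frame fields, the attraction_score function, the 0-to-1 estimates) is made up for illustration; the point is just that if estimates are only collected during readable moments, then time spent looking under favorable conditions is what drives the final score.

```python
# Purely hypothetical sketch: attraction estimates are only averaged over
# moments when reading conditions are favorable (direct gaze, visible face,
# good light), so the score mostly reflects how long you chose to look.
from dataclasses import dataclass

@dataclass
class Frame:
    looking_directly: bool      # subject is looking straight at the target
    face_visible: bool          # face unobstructed (no sunglasses, burqa, etc.)
    good_lighting: bool
    attraction_estimate: float  # model output for this frame, 0..1

def attraction_score(frames: list[Frame]) -> float:
    """Average the per-frame estimates, but only over readable moments."""
    readable = [f for f in frames
                if f.looking_directly and f.face_visible and f.good_lighting]
    if not readable:
        return 0.0  # nothing readable, nothing to report
    return sum(f.attraction_estimate for f in readable) / len(readable)

# Example: the two readable direct looks drive the score; the other frames
# contribute nothing, however strong the underlying feeling.
frames = [Frame(True, True, True, 0.8),
          Frame(True, True, True, 0.7),
          Frame(False, True, True, 0.9),
          Frame(True, False, False, 0.9)]
print(attraction_score(frames))  # 0.75
```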
Yes, a reasonable world might adjust rules on sexual harassment to account for many complex changing conditions. But we may not live in a reasonable world. I’m not making any specific claims about sexual harassment rules, but symbolic purposes influence many of the norms and laws that we adopt. That is, we often support such rules not because of the good consequences of having them, but because we like the way that our support for such rules makes us look. For example, many support laws against drugs and prostitution even when they believe that such laws do little to discourage such things. They want to be personally seen as publicly taking a stand against such behavior.
Consider rules against expressing racism and sexism. And remember that the usual view is that everyone is at least a bit racist and sexist, in part because they live in a racist and sexist society. What happens when we can collect statistics on each person regarding how their visible evaluations of the people around them correlate with the race and sex of those people? Will we then punish white males for displaying statistically significantly lower opinions of non-whites and non-males via their body language? (That’s like a standard we often apply to firms today.) As with sexual harassment, the fact that people can moderate these readings via their behaviors may make these readings seem to count as intentional acts, especially since they can track the stats themselves, to see the impression they are giving off. To some degree they choose to visibly treat certain people around them with disrespect. And if we are individually eager to show that we personally disapprove of racism and sexism, we may publicly support strict application of such rules even if that doesn’t actually deal well with real problems of racism and sexism in the world.
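Here is a hypothetical sketch, again in Python, of the kind of per-person statistic that paragraph imagines. The names (flag_disparity, the warmth readings, the 0.05 threshold) are assumptions for illustration only; it simply runs Welch’s t-test on the displayed-warmth readings toward two groups and flags a statistically significant gap.

```python
# Hypothetical per-person disparity statistic: do this person's displayed
# warmth readings differ significantly by the group of the people evaluated?
from scipy import stats

def flag_disparity(toward_group_a, toward_group_b, alpha=0.05):
    """Welch's t-test on mean displayed warmth toward two groups."""
    t, p = stats.ttest_ind(toward_group_a, toward_group_b, equal_var=False)
    mean_a = sum(toward_group_a) / len(toward_group_a)
    mean_b = sum(toward_group_b) / len(toward_group_b)
    return {
        "mean_a": round(mean_a, 3),
        "mean_b": round(mean_b, 3),
        "t": round(float(t), 2),
        "p": round(float(p), 4),
        "flagged": bool(p < alpha and mean_a < mean_b),
    }

# Toy example: warmth readings (0..1) one person displays toward members of
# two groups, accumulated over many observed interactions.
toward_a = [0.52, 0.48, 0.50, 0.47, 0.51, 0.49, 0.46, 0.50]
toward_b = [0.58, 0.61, 0.57, 0.60, 0.59, 0.62, 0.58, 0.60]
print(flag_disparity(toward_a, toward_b))  # flagged: True on this toy data
```

Note that the sketch says nothing about why the readings differ; as the paragraph above suggests, the contested question is whether such a gap would be treated as an intentional act at all.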
Remember that this tech should improve gradually. So for the first cases that set key precedents, the tech will be weak and thus flag very few people as clear harassers or racists or sexists. And those few exceptions are much more likely to be people who actually did intend to harass and express racism or sexism, and who embody extreme versions of such behavior. While they will also probably tend to be people who are weird and non-conformist in other ways, this tech for reading feelings may initially seem to do a good job of helping us identify and deal with problematic people. For example, we may be glad that tech can identify the priests who most clearly lust after the young boys around them.
But as the tech gets better it will slowly be able to flag more and more people as sending disapproved messages. The rate will drift upward from one person in ten thousand, to one in a thousand, to one percent, and so on. People may then start to change their behavior in bigger ways, to avoid being flagged, but that may be too little too late, especially if large libraries of old video and other recordings of behavior remain available to process with new methods.
At this point we may reach a “hypocralypse”, where rules that punish hypocrisy collide in a big way with tech that can expose hypocrisy. That is, where tech that can involuntarily show our feelings intersects with norms and laws that punish the expression of common but usually hidden feelings. Especially when such rules are in part symbolically motivated.
What happens then, I don’t know. Do white males start wearing burqas, do we regulate this tech heavily, or do we tone down and relax our many symbolic rules? I’ll hope for the best, but I still fear the worst.
People aren't that naive. Deep down they already know this hypocrisy between our public feelings and private feelings exists, and they won't freak out when it is exposed. They will adjust their expectations and move forward. That was my reaction to The Elephant in the Brain, at least. Maybe other people won't adjust as quickly.
Right now some people are more successful at hypocritical doublethink than others. This skill is probably normally distributed. It comes at a price: not being able to peek into one's own decision-making algorithm. I guess that makes a hypocritical person less suited for some jobs (programming comes to mind). The distribution of this skill today roughly reflects how useful it is to be a conscious thinker vs. a double-thinker.
To know how the hypocralypse unfolds, one needs to know what this trade-off looks like in the future. We might go to a full-doublethink world, where anyone who fails to hide signs of selfish motives is exiled. But that happens only if conscious thinkers provide no significant gains to society. Aspies seem to be increasingly important to the economy (and have gotten a boost in status).
I think it might go either way when it comes to deciding whether we exile conscious thinkers or get rid of hypocrisy in our behavior. Or it might somehow settle somewhere in the middle.