The Master and His Emissary

I had many reasons to want to read Iain McGilchrist’s 2009 book The Master and His Emissary.

  1. It’s an ambitious big-picture book, by a smart, knowledgeable polymath. I love that sort of book.
  2. I’ve been meaning to learn more about brain structure, and this book talks a lot about that.
  3. I’ve been wanting to read more literary-minded critics of economics, and of sci/tech more generally.
  4. I’m interested in critiques of civilization suggesting that people were better off in less modern worlds.

This video gives an easy-to-watch summary of the book:

McGilchrist has many strong opinions on what is good and bad in the world, and on where civilization has gone wrong in history. What he mainly does in his book is to organize these opinions around a core distinction: the left vs right split in our brains. In sum: while we need both left and right brain style thinking, civilization today has gone way too far in emphasizing left styles, and that’s the main thing that’s wrong with the world today.

McGilchrist maps this core left-right brain distinction onto many dozens of other distinctions, and in each case he says we need more of the right version and less of the left. He doesn’t really argue much for why right versions are better (on the margin); he mostly sees that as obvious. So what his book mainly does is help people who agree with his values organize their thinking around a single key idea: right brains are better than left.

Here is McGilchrist’s key concept of what distinguishes left from right brain reasoning:


My Poll, Explained

So many have continued to ask me the same questions about my recent Twitter poll that I thought I’d try to put all my answers in one place. This topic isn’t that fundamentally interesting, so most of you may want to skip this post.

Recently, Christine Blasey Ford publicly accused US Supreme Court nominee Brett Kavanaugh of a sexual assault. This accusation will have important political consequences, however it is resolved. Congress and the US public are now put in the position of having to evaluate the believability of this accusation, and thus must consider which clues might indicate if the accusation is correct or incorrect.

Immediately after the accusation, many said that the timing of the accusation seemed to them suspicious, occurring exactly when it would most benefit Democrats seeking to derail any nomination until after the election, when they may control the Senate. And it occurred to me that a Bayesian analysis might illuminate this issue. If T = the actual timing, A = accurate accusation, W = wrong accusation, then how much this timing consideration pushes us toward final beliefs is given by the likelihood ratio p(T|W)/p(T|A). A ratio above one pushes against believing the accusation, while a ratio below one pushes for it.
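For concreteness, here is a minimal sketch of this odds-form update; the numbers plugged in below are purely illustrative assumptions, not estimates from the poll.

```python
# Minimal sketch of the odds-form Bayes update described above.
# All numbers below are illustrative assumptions, not poll results.

def posterior_odds_wrong(prior_odds_wrong, p_T_given_W, p_T_given_A):
    """Posterior odds that the accusation is wrong, after seeing timing T.

    prior_odds_wrong: prior odds P(W)/P(A), before considering timing
    p_T_given_W:      probability of this timing if the accusation is wrong
    p_T_given_A:      probability of this timing if the accusation is accurate
    """
    likelihood_ratio = p_T_given_W / p_T_given_A
    return prior_odds_wrong * likelihood_ratio

# Example: even prior odds, and timing judged 3x more likely under W than A.
odds = posterior_odds_wrong(prior_odds_wrong=1.0, p_T_given_W=0.6, p_T_given_A=0.2)
print(odds)                  # 3.0: a ratio above one pushes against the accusation
print(odds / (1.0 + odds))   # 0.75, expressed as a probability that it is wrong
```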

The term P(T|A) seemed to me the most interesting term, and it occurred to me to ask what people thought about it via a Twitter poll. (If there was continued interest, I could ask another question about the other term.) Twitter polls are much cheaper and easier for me to do than other polls. I’ve done dozens of them so far, and rarely has anyone objected. Such polls only allow four options, and you don’t have many characters to explain your question. So I used those characters mainly to make clear a few key aspects of the accusation’s timing:

Many claimed that my wording was misleading because it didn’t include other relevant info that might support the accusation, such as who else the accuser is said to have told, and when, and what pressures she is said to have faced on when to go public. They didn’t complain about my not including info that might lean the other way, such as low detail on the claimed event and a lack of supporting witnesses. But a short tweet just can’t include much relevant info; I barely had enough characters to explain key accusation timing facts.

It is certainly possible that my respondents suffered from cognitive biases, such as assuming too direct a path between accuser feelings and a final accusation. To answer my poll question well, they should have considered many possible complex paths by which an accuser says something to others, who then tell other people, some of whom then choose when to bring pressure back on that accuser to make a public accusation. But that’s just the nature of any poll; respondents may well not think carefully enough before answering.

For the purposes of a Twitter poll, I needed to divide the range from 0% to 100% into four bins. I had high uncertainty about where poll answers would lie, and for the purpose of Bayes’ rule it is factors that matter most. So I chose three ranges each spanning roughly a factor of 4 to 5, and a leftover bin encompassing an infinite factor. If anything, my choice was biased against answers in the infinite-factor bin.

I really didn’t know which way poll answers would go. If most answers were high fractions, that would tend to support the accusation, while if most answers were low fractions, that would tend to question the accusation. Many accused me of posting the poll in order to deny the accusation, but for that to work I would have needed a good guess on the poll answers. Which I didn’t have.

My personal estimate would be somewhere in the top two ranges, and that plausibly biased me to pick bins toward such estimates. As two-thirds of my poll answers were in the lowest bin I offered, that suggests that I should have offered an even wider range of factors. Some claimed that I biased the results by not putting more bins above 20%. But the fraction of answers above 20% is still below the 25% per bin that one would target with four equal bins.

It is certainly plausible that my pool of poll respondents is not representative of the larger US or world population. And many called it irresponsible and unscientific to run an unrepresentative poll, especially if one doesn’t carefully show which wordings matter how via A/B testing. But few complain about the thousands of other Twitter polls run every day, or about my dozens of others. And the obvious easy way to show that my pool or wordings matter is to show different answers with another poll where those vary. Yet almost no one even tried that.

Also, people don’t complain about others asking questions in simple public conversations, even though those can be seen as N=1 examples of unrepresentative polls without A/B testing on wordings. It is hard to see how asking thousands of people the same question via a Twitter poll is less informative than just asking one person that same question.

Many people said it is just rude to ask a poll question that insinuates that rape accusations might be wrong, especially when we’ve just seen someone going through all the pain of making one. They say that doing so is pro-rape and discourages the reporting of real rapes, and that this must have been my goal in making this poll. But consider an analogy with discussing gun control just after a shooting. Some say it is rude then to discuss anything but sympathy for victims, while others say this is exactly a good time to discuss gun control. I say that a moment when we must evaluate a specific rape accusation is exactly a good time to think about which clues might indicate whether the accusation is accurate or wrong.

Others say that it is reasonable to conclude that I’m against their side if I didn’t explicitly signal within my poll text that I’m on their side. That’s just the sort of signaling game equilibrium we are in. And so they are justified in denouncing me for being on the wrong side. But that seems quite a burdensome standard to hold polls to, as polls already have too few characters to allow an adequate explanation of a question, and it seems obvious that the vast majority of Twitter polls today are not in fact held to this standard.

Added 24Sep: I thought the poll interesting enough to ask, relative to its costs to me, but I didn’t intend to give it much weight. It was all the negative comments that made it a bigger deal.

Note that, at least in my Twitter world, we see a big difference in attitudes between vocal folks who tweet and those who merely answer polls. That latter “silent majority” is more skeptical of the accusation.


Allow Covert Eye-Rolls

Authorities, such as parents, teachers, bosses, and police, tend to have both dominance and prestige. Their dominance is usually clear: they can hit you, fire you, or send you to your room. Their prestige tends to be less clear, as that is an informal social consensus on their relevant ability and legitimacy. They have to earn prestige in the eyes of subordinates, and subordinates talk with each other to form a consensus on that. I’ve suggested that we often choose bosses primarily for their prestige indicators, as that allows subordinates to more easily submit to dominance without shame.

There’s a classic scene in fiction where an authority goes too far to squash defiance. Yes, authorities must respond to overt defiance that interferes with key functions, like a child refusing to come home or a student refusing to stop disrupting class. But usually authorities prefer to suggest actions, rather than to give direct orders. And often subordinates use covert signals to tell each other that they are less than fully impressed by authority. They might roll their eyes, smirk, slouch, let their attention wander, etc. And sometimes authorities take visible offense at such signs, punishing offenders severely. In extreme cases they may demand not only that everyone seem enthusiastically positive in public, but may also plant spies and monitor private talk, to punish anyone who says anything remotely negative in private.

This is the scenario of extreme totalitarian dominance, a picture that groups often try to paint of their opponents. It was the rationale in the ancient world for why we have good kings while they have evil tyrants, and why we’d be doing them a favor to replace their leaders with ours. More recently, it is the story that the West told about Nazism and Communism. It is even the typical depiction today of historical slavery; it isn’t enough to describe slaves as poor, over-worked, and with few freedoms, they must also be shown as having mean, tyrannical owners.

The key problem for authorities is that repressing dissent has the direct effect of discouraging rebellion, but the indirect effect of looking bad. It looks weak to try to stop subordinates from talking frankly about the prestige they think you deserve. Doing this suggests that you don’t think they will estimate your prestige highly. Much better to present the image that most everyone accepts your authority due to your high prestige, and it is only a few malcontent troublemakers who defy you. So most authorities allow subordinate eye-rolls, smirks, negative gossip, etc. as long as they are not too overtly a direct commonly-visible challenge to their authority. They visibly repress overt defiance by one low prestige person or small group, but are wary of simply crushing large respected groups, or hindering their covert gossip. Trying that makes you seem insecure and weak.

In the world of cultural elites today, like arts, journalism, civil service, law, and academia, there’s a dominant culture, and it punishes deviations from its core tenets. But its supporters should be worried about going too far toward totalitarian dominance. They should want to project the image that they don’t need to repress dissent much, as their culture is so obviously prestigious. If the good people are pretty unified in their respect for it, it should be sufficient to punish those who most openly and directly defy it. They shouldn’t seem to feel much threatened by others rolling their eyes.

It is in this context that I think we should worry about the recent obsession with gaslighting and dog-whistles. I’ve posted some controversial tweets recently, and in response others have publicly attributed to me extreme and culturally-defiant views. (Such as that I’m sexist, pro-rape, anti-reporting-of-rape, and seem likely to rape.) When I’ve pointed out that I’ve said no such things and often said the opposite, they often respond with dog-whistle concerns.

That is, they say that there are all these people out there who pretend to submit to culturally dominant views, but who actually harbor sympathy with opposing views. They hide in the shadows communicating with each other covertly, using anonymous internet accounts and secret hand signals. It is so important to crush these rebels that we can’t afford to give anyone the benefit of the doubt and criticize them only for the views they actually express. We must aggressively punish people for even seeming to some people like they might be the sort to secretly harbor rebel sympathies. And once everyone knows that we are in a strong repression regime, there’s no excuse for not lying low in abject submission, avoiding any possible hint of forbidden views. If you even touch such topics, you have only yourself to blame for what happens to you.

I hope you can see the problem. Worlds of strong repression are not secure stable worlds. Since everyone knows that authorities are making it hard for others to share opinions on authority prestige, they presume low levels of prestige. So if there’s ever an opening for a rebellion, they expect to see that rebellion. If the boot ever lets up just a bit in stomping the face, it may never get a second chance.

Let us instead revert to the traditional intellectual standard: respond most to what people say, and don’t stretch too hard to infer what you think they mean from scattered hints in what they’ve said and done. Let them roll their eyes and feel each other out for how much they respect the dominant authorities, be they people or cultures. As they say:

If you love something set it free. If it comes back it’s yours. If not, it was never meant to be.


Great Filter, 20 Years On

Twenty years ago today, I introduced the phrase “The Great Filter” in an essay on my personal website. Today Google says 300,000 web pages use this phrase, and 4.3% of those mention my name. This essay has 45 academic citations, and my related math paper has 17 cites.

These citations are a bit over 1% of my total citations, but this phrase accounts for 5% of my press coverage. This press is mostly dumb luck. I happened to coin a phrase on a topic of growing and wide interest, yet others more prestigious than I didn’t bother (as they often do) to replace it with another phrase that would trace back to them.

I have mixed feelings about writing the paper. Back then I was defying the usual academic rule to focus narrowly. I was right that it is possible to contribute to many more different areas than most academics do. But what I didn’t fully realize is that to academic economists non-econ publications don’t exist, and that publication is only the first step to academic influence. If you aren’t around in an area to keep publishing, giving talks, going to meetings, doing referee reports, etc., academics tend to correctly decide that you are politically powerless and thus you and your work can safely be ignored.

So I’m mostly ignored by the academics who’ve continued in this area – don’t get grants, students, or invitations to give talks, to comment on paper drafts, or to referee papers, grants, books, etc. The only time I’ve ever been invited to talk on the subject was a TEDx talk a few years ago. (And I’ve given over 350 talks in my career.) But the worst scenario of being ignored is that it is as if your paper never existed, and so you shouldn’t have bothered writing it. Thankfully I have avoided that outcome, as some of my insights have been taken to heart, both academically and socially. People now accept that finding independent alien life simpler than us would be bad news, that the very hard filter steps should be roughly equally spaced in our history, and that the great filter gives a reason to worry about humanity’s future prospects.


News Accuracy Bonds

Fake news is a type of yellow journalism or propaganda that consists of deliberate misinformation or hoaxes spread via traditional print and broadcast news media or online social media. This false information is mainly distributed by social media, but is periodically circulated through mainstream media. Fake news is written and published with the intent to mislead in order to damage an agency, entity, or person, and/or gain financially or politically, often using sensationalist, dishonest, or outright fabricated headlines to increase readership, online sharing, and Internet click revenue. (more)

One problem with news is that sometimes readers who want truth instead read (or watch) and believe news that is provably false. That is, a news article may contain claims that others are capable of proving wrong to a sufficiently expert and attentive neutral judge, and some readers may be fooled against their wishes into believing such news.

Yes, news can have other problems. For example, there can be readers who don’t care much about truth, and who promote false news and its apparent implications. Or readers who do care about truth may be persuaded by writing whose mistakes are too abstract or subtle to prove wrong now to a judge. I’ve suggested prediction markets as a partial solution to this; such markets could promote accurate consensus estimates on many topics which are subtle today, but which will eventually become sufficiently clear.

In this post, however, I want to describe what seems to me the simple obvious solution to the more basic problem of truth-seekers believing provably-false news: bonds. Those who publish or credential an article could offer bonds payable to anyone who shows their article to be false. The larger the bond, the higher their declared confidence in their article. With standard icons for standard categories of such bonds, readers could easily note the confidence associated with each news article, and choose their reading and skepticism accordingly.

That’s the basic idea; the rest of this post will try to work out the details.

While articles backed by larger bonds should be more accurate on average, the correlation would not be exact. Statistical models built on the dataset of bonded articles, some of which eventually pay bonds, could give useful rough estimates of accuracy. To get more precise estimates of the chance that an article will be shown to be in error, one could create prediction markets on the chance that an individual article will pay a bond, with initial prices set at statistical model estimates.

Of course the same article should have a higher chance of paying a bond when its bond amount is larger. So even better estimates of article accuracy would come from prediction markets on the chance of paying a bond, conditional on a large bond amount being randomly set for that article (for example) a week after it is published. Such conditional estimates could be informative even if only one article in a thousand is chosen for such a very large bond. However, since there are now legal barriers to introducing prediction markets, and none to introducing simple bonds, I return to focusing on simple bonds.

Independent judging organizations would be needed to evaluate claims of error. A limited set of such judging organizations might be certified to qualify an article for any given news bond icon. Someone who claimed that a bonded article was in error would have to submit their evidence, and be paid the bond only after a valid judging organization endorsed their claim.

Bond amounts should be held in escrow or guaranteed in some other way. News firms could limit their risk by buying insurance, or by limiting how many bonds they’d pay on all their articles in a given time period. Say no more than two bonds paid on each day’s news. Another option is to have the bond amount offered be a function of the (posted) number of readers of an article.

As a news article isn’t simply all true or all false, one could distinguish degrees of error. A simple approach could go sentence by sentence. For example, a bond might pay according to some function of the number of sentences (or maybe sentence clauses) in an article shown to be false. Alternatively, sentence-level errors might be combined to produce categories of overall article error, with bonds paying different amounts to those who prove each different category. One might excuse editorial sentences that do not intend to make verifiable newsy claims, and distinguish background claims from claims central to the original news of the article. One could also distinguish degrees of error within a claim, and pay in proportion to that degree. For example, a quote that is completely made up might be rated as completely false, while a quote that is modified in a way that leaves the meaning mostly the same might count as a small fractional error.
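As one concrete way such a payout rule could work, here is a minimal sketch; the averaging scheme, the threshold, and the numbers are my own illustrative assumptions, not part of the proposal.

```python
# Sketch of a per-sentence bond payout rule. The particular weights and
# threshold below are illustrative assumptions, not a fixed design.

def article_error_score(sentence_errors):
    """sentence_errors: per-sentence error degrees in [0, 1], where 0 means
    accurate and 1 means completely false. Editorial sentences making no
    verifiable claim are simply left out of the list."""
    if not sentence_errors:
        return 0.0
    return sum(sentence_errors) / len(sentence_errors)

def bond_payout(bond_amount, sentence_errors, fake_threshold=0.5):
    """Pay the full bond when the article is wrong overall ("fake news"),
    and a proportional fraction for smaller degrees of error."""
    score = article_error_score(sentence_errors)
    if score >= fake_threshold:
        return bond_amount
    return bond_amount * score

# Example: ten checked sentences, one fabricated quote (degree 1.0) and one
# slightly altered quote (degree 0.2); the rest are accurate.
errors = [0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0.2]
print(bond_payout(1000, errors))   # 120.0
```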

To the extent that it is possible to verify partisan slants across large sets of articles, for example in how people or organizations are labeled, publishers might also offer bonds payable to those who can show that a publisher has taken a consistent partisan slant.

A subtle problem is: who pays the cost to judge a claim? On the one hand, judges can’t just offer to evaluate all claims presented to them for free. But on the other hand, we don’t want to let big judging fees stop people from claiming errors when errors exist. To make a reasonable tradeoff, I suggest a system wherein claim submissions include a fee to pay for judging, a fee that is refunded double if that claim is verified.

That is, each bond specifies a maximum amount it will pay to judge that bond, and which judging organizations it will accept.  Each judging organization specifies a max cost to judge claims of various types. A bond is void if no acceptable judge’s max is below that bond’s max. Each submission asking to be paid a bond then submits this max judging fee. If the judges don’t spend all of their max judging fee evaluating this case, the remainder is refunded to the submission. It is the amount of the fee that the judges actually spend that will be refunded double if the claim is supported. A public dataset of past bonds and their actual judging fees could help everyone to estimate future fees.
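For concreteness, here is a minimal sketch of the fee accounting just described; the amounts and function names are assumptions chosen only for illustration.

```python
# Sketch of the claim-fee accounting described above. The amounts and
# names here are illustrative assumptions, not a specification.

def settle_claim(max_judging_fee, fee_actually_spent, claim_supported, bond_amount):
    """Return (amount returned to the claimant, bond paid to the claimant).

    The claimant deposits max_judging_fee up front. Any unspent fee is always
    returned; the portion the judges actually spend is refunded double only
    if they support the claim, in which case the bond is paid as well.
    """
    assert 0 <= fee_actually_spent <= max_judging_fee
    unspent = max_judging_fee - fee_actually_spent
    if claim_supported:
        return unspent + 2 * fee_actually_spent, bond_amount
    return unspent, 0

# Example: the claimant deposits 50, and the judges spend 30 of it.
print(settle_claim(50, 30, claim_supported=True,  bond_amount=1000))  # (80, 1000)
print(settle_claim(50, 30, claim_supported=False, bond_amount=1000))  # (20, 0)
```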

Those are the main subtleties that I’ve considered. While there are ways to set up such a system better or worse, the basic idea seems robust: news publishers who post bonds payable if their news is shown to be wrong thereby credential their news as more accurate. This can allow readers to more easily avoid believing provably-false news.

A system like the one I’ve just proposed has long been feasible; why hasn’t it been adopted already? One possible theory is that publishers don’t offer bonds because that would remind readers of typical high error rates:

The largest accuracy study of U.S. papers was published in 2007 and found one of the highest error rates on record — just over 59% of articles contained some type of error, according to sources. Charnley’s first study [70 years ago] found a rate of roughly 50%. (more)

If bonds paid mostly for small errors, then bond amounts per error would have to be very small, and calling reader attention to a bond system would mostly remind them of high error rates, and discourage them from consuming news.

However, it seems to me that it should be possible to aggregate individual article errors into measures of overall article error, and to focus bond payouts on the most mistaken “fake news” type articles. That is, news error bonds should mostly pay out on articles that are wrong overall, or at least quite misleading regarding their core claims. Yes, a bit more judgment might be required to set up a system that can do this. But it seems to me that doing so is well within our capabilities.

A second possible theory to explain the lack of such a system today is the usual idea that innovation is hard and takes time. Maybe no one ever tried this with sufficient effort, persistence, or coordination across news firms. So maybe it will finally take some folks who try this hard, long, and wide enough to make it work. Maybe, and I’m willing to work with innovation attempts based on this second theory.

But we should also keep a third theory in mind: that most news consumers just don’t care much for accuracy. As we discuss in our book The Elephant in the Brain, the main function of news in our lives may be to offer “topics in fashion” that we each can riff on in our local conversations, to show off our mental backpacks of tools and resources. For that purpose, it doesn’t much matter how accurate such news is. In fact, it might be easier to show off with more fake news in the mix, as we can then show off by commenting on which news is fake. In this case, news bonds would be another example of an innovation designed to give us more of what we say we want, which is not adopted because at some level we know that we have hidden motives and actually want something else.


Sexism Inflation

What counts as “sexism” seems to be slowly inflating. You may recall that in 2005, Larry Summers lost his job as Harvard president for suggesting that genetically-caused ability differences contribute to women doing less well than men in science. In 2017, James Damore lost his job as a Google engineer for suggesting that Google has fewer female engineers in part because women tend to have different preferences, being more artistic, social, neurotic, and tending to prefer people to things.

Now a recent article describes the sad story of a math paper on why males are more variable on many traits in many species. Its key idea is that variance is rewarded when less than half of candidates are selected, while variance is punished when more than half are selected. This paper was accepted for publication by a math journal, and then it was unaccepted. Then this happened again at another math journal. No one claimed there was anything technically wrong with the paper, but they did claim that it was “damaging to the aspirations of impressionable young [math] women”, that “right-wing media may pick this up”, and that it “support[s] a very controversial, and potentially sexist, set of ideas”.
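A small simulation can illustrate that key idea; the normal trait distributions, the variance levels, and the cutoffs below are my own assumptions, chosen only to show the direction of the effect.

```python
# Toy simulation of the selection-and-variance idea, under my own simplifying
# assumptions: two equal-mean normal populations, one with higher variance,
# and simple top-fraction truncation selection from the pooled group.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
low_var  = rng.normal(0.0, 1.0, N)   # lower-variance population
high_var = rng.normal(0.0, 1.5, N)   # higher-variance population

def high_var_share_selected(selected_fraction):
    """Among those selected (the top selected_fraction of the pooled group),
    what share comes from the higher-variance population?"""
    pooled = np.concatenate([low_var, high_var])
    cutoff = np.quantile(pooled, 1.0 - selected_fraction)
    n_high = np.sum(high_var >= cutoff)
    n_low = np.sum(low_var >= cutoff)
    return n_high / (n_high + n_low)

print(high_var_share_selected(0.05))  # strict selection: well above one half
print(high_var_share_selected(0.95))  # lax selection: below one half
```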

So first it was sexist to suggest human women have lower science ability, then sexist to suggest women have differing tech-job preferences, and now it is sexist to say that in general across species and traits males tend to have more variance because they are selected less often.

My job seems safe and I haven’t had any publications unaccepted. But I can report on a different kind of inflation regarding what counts as (something close to) sexism: it seems now not okay to presume that male-female differences that were common in the past may continue on into the future, unless you explicitly say that such differences are due to evil discrimination and claim that the future will be full of evil discriminators.

My evidence? In my book Age of Em, I guess that past male-female differences may continue on into the future. This includes differences in what each sex desires in the other sex, and differences in employer demand for each sex in differing circumstances.

On differing desires, Sarah O’Connor said in a Financial Times review back in 2016:

One also has to believe that current economic and social theories will hold in this strange new world; that the “unknown unknowns” are not so great as to make any predictions impossible. Certainly, some of the forecasts seem old-fashioned, like the notion that male ems will prefer females with “signs of nurturing inclinations and fertility, such as youthful good looks” while females will prefer males with “signs of wealth and status”. Even so, the journey is thought-provoking. (more)

On differing labor demand, Philip Ball said in Aeon just this last week:

He also betrays a rather curious attitude to the arrow of historical causation when he notes in The Age of Em that male ems might be in higher demand than female ems, because of ‘the tendency of top performers in most fields today to be men’. (more)

Here is the entire relevant section from my book on labor demand by sex:

The em economy may have an unequal demand for the work of males and females. Although it is hard to predict which gender will be more in demand in the em world, one gender might end up supplying proportionally more workers than the other. On the one hand, the tendency of top performers in most fields today to be men suggests there might be more demand for male ems. However, while today’s best workers are often motivated by the attention and status that being the best can bring, in the em world there are millions of copies of the best workers, who need to find other motivations for their work. On the other hand, today women are becoming better educated and are in increasing demand in modern workplaces. There are some indications that women have historically worked harder and more persistently in hard-times low-status situations, which seem similar in some ways to the em world. (p.338-9)

So I consider both possibilities, higher male and female labor demand, and for each possibility I note a sex-difference pattern from the past suggesting that possibility. Why does that suggest a “curious attitude to the arrow of historical causation”? Emailing the author, I was told that “a blank statement of male predominance today could easily be misinterpreted as an acceptance of something natural and inevitable in it” and “To say that [men] have been the ‘top performers’ implies that they achieve better on a level playing field.” And also “such differences … [might] arise because of a choice to perpetuate the inequalities we have seen historically. And one certainly can’t dismiss that possibility. But you do not say that.”

So it seems that today, to avoid (something close to) the label “sexist” (or “old-fashioned” on sex differences), it is not enough that you avoid explaining past observed sex differences in behavior in general in terms of sex differences in any selected parameters, including abilities or preferences, and including their means or variances. One must also presume that such differences will not continue into the future, unless one explicitly claims they will be caused by continued unfair discrimination. Regarding sex differences, predicting the future by guessing that it may be similar to the past is presumed sexist.


Commitments Explain Gaps

Consider trying to predict the details of unattached people’s kisses. That is, you might have data on who such people have actually kissed when, where, and how, and data on who they say they would be willing to kiss under what circumstances. From such data you make models that predict both the kisses that actually happen and the kisses they say they are willing to join. For example, you may notice that they kiss more when they are awake, are not busy with other activities, and are feeling frisky. They kiss more when they and their partner are clean and well groomed. They kiss more when they are more attractive to others, and when other willing partners are more attractive to them according to their preferences.

Now consider doing the same exercise for people who are married. When you fit this sort of data, you will find one new big factor: they almost always kiss only their spouse. And if you try to explain both these datasets in the same terms, you’d have to say spouses are in some strange way vastly more attracted to each other than they are to everyone else. This attraction is strange because it isn’t explained by other measurable features you can see, and no one else seems to feel this extra attraction.

Of course the obvious explanation here is that married people typically make a commitment to kiss only each other. Yes there is a sense in which they are attracted more to each other than to other people, but this isn’t remotely sufficient to explain their extreme tendencies to kiss only each other. It is their commitment that explains this behavior gap, i.e., this extra strong preference for each other.

Now consider trying to predict policies and public attitudes regarding limits on who can migrate where, and who can buy products and services from where. And consider trying to predict this using the foreseeable concrete consequences of such policy limits. In principle, many factors seem relevant. Different kinds of people and products might produce different externalities in different situations. Their quality might be uncertain and depend on various features. One might naturally want a process to consider potential candidates and review their suitability.

Such models might predict more limits on people and products that come from further away in spatial and cultural distance, more limits on things that have lower quality and higher risks, and more limits when there is more infrastructure to help enforce such limits. And in fact those sort of models seem to do okay at predicting the following two kinds of variation: variation on limits on people and products that move between nations, and variation on limits on people and products that move within nations.

However, if we compare limits between nations and limits within nations, these sort of models seem to me to have a big explanatory gap, analogous to the kissing attractiveness gap in models that predict the kisses of married spouses. Between nations, the default is to have substantial limits on the movement of people and products, while within nations the strong default is to allow unlimited movement of people and products.

Yes, the context of movement between nations seems to be on average different from movement within nations, and different in the directions predicted to result in bigger limits on movement. At least according to the models we would use to explain such variation between nations, and within nations. But while the directions make sense, the magnitudes are strangely enormous. A similar degree of difference within a nation results in far smaller limits on the movement of people and products than does a comparable degree of difference between nations.

We are thus left with another explanatory gap: we need something else to explain why people are so reluctant to allow movement between nations, relative to movement within nations. And my best guess is that the answer here is another kind of commitment: people feel that they have committed to allowing movement within nations, even if that causes problems, and to being suspicious of movement between nations, even if that makes them lose out on opportunities. That is part of what they have committed themselves to by joining a nation.

If this explanation is correct, it of course raises the question of whether this is a sensible commitment to make. For that, we need a better analysis of the benefits and costs of committing to joining nations, an under-explored but important topic.


A Coming Hypocralypse?

Many people have been working hard for a long time to develop tech that helps to read people’s feelings. They are working on ways to read facial expressions, gazes, word choices, tones of voice, sweat, skin conductance, gait, nervous habits, and many other body features and motions. Over the coming years, we should expect this tech to consistently get cheaper and better at reading subtler feelings of more people in more kinds of contexts more reliably.

Much of this tech will be involuntary. While your permission and assistance may help such tech to read you better, others will often be able to read you using tech that they control, on their persons or in the buildings around you. They can use tech integrated with other complex systems that is thus hard to monitor and regulate. Yes, some defenses are possible, such as wearing dark sunglasses or burqas, and electronically modulating your voice. But such options seem rather awkward and I doubt most people will be willing to use them much in most familiar social situations. And I doubt that regulation will greatly reduce the use of this tech. The overall trend seems clear: our true feelings will become more visible to the people around us.

We are often hypocritical about our feelings. That is, we pretend to some degree to have certain acceptable public feelings, while actually harboring different feelings. Most people know that this happens often, but our book The Elephant in the Brain suggests that we still vastly underestimate typical levels of hypocrisy. We all mask our feelings a lot, quite often from ourselves. (See our book for many more details.)

These two facts, better tech for reading feelings and widespread hypocrisy, seem to me to be on a collision course. As a result, within a few decades, we may see something of a “hypocrisy apocalypse”, or “hypocralypse”, wherein familiar ways to manage hypocrisy become no longer feasible, and collide with common norms, rules, and laws. In this post I want to outline some of the problems we face.

Long ago, I was bullied as a child. And so I know rather well that one of the main defenses that children develop to protect themselves against bullies is to learn to mask their feelings. Bullies tend to see kids who are visibly scared or distraught as openly inviting them to bully. Similarly, many adults protect themselves from salespeople and sexual predators by learning to mask their feelings. Masking feelings also helps us avoid conflict with rivals at work and in other social circles. For example, we learn to not visibly insult or disrespect big people in rowdy bars if we don’t want to get beaten up.

Tech that unmasks feelings threatens to weaken the protections that masked feelings provide. That big guy in a rowdy bar may use new tech to see that everyone else there can see that you despise him, and take offense. Your bosses might see your disrespect for them, or your skepticism regarding their new initiatives. Your church could see that you aren’t feeling very religious at church service. Your school and nation might see that your pledge of allegiance was not heart-felt. And so on.

While these seem like serious issues, change will be mostly gradual, and so we may have time to flexibly search in the space of possible adaptations. We can try changing with whom we meet, how, and for what purposes, and what topics we consider acceptable to discuss where. We can be more selective about whom we make more visible, and how.

I worry more about collisions between better tech for reading feelings and common social norms, rules, and laws. Especially norms and laws that we adopt for more symbolic purposes, instead of to actually manage our interactions. These things tend to be less responsive to changing conditions.

For example, today we often consider it to be unacceptable “sexual harassment” to repeatedly and openly solicit work associates for sex, especially after they’ve clearly rejected the solicitor. We typically disapprove not just of direct requests, but also of less direct but relatively clear invitation reminders, such as visible leers, sexual jokes, and calling attention to your “junk”. And of course such rules make a great deal of sense.

But what happens when tech can make it clearer who is sexually attracted how much to whom? If the behavior that led to these judgements were completely out of each person’s control, it might be hard to blame on anyone. We might then socially pretend that it doesn’t exist, though we might eagerly check it out privately. Unfortunately, our behavior will probably continue to modulate the processes that produce such judgements.

For example, the systems that judge how attracted you are to someone might focus on the moments when you directly look at that person, when your face is clearly visible to some camera, under good lighting. Without your wearing sunglasses or a burqa. So the longer you spend directly looking at someone under such conditions, the better the tech will be able to see your attraction. As a result, your choice to spend more time looking directly at them under favorable reading conditions might be seen as an intentional act, a choice to send the message that you are sexually attracted to them. And thus your continuing to do so after they have clearly rejected you might be seen as sexual harassment.

Yes, a reasonable world might adjust rules on sexual harassment to account for many complex changing conditions. But we may not live in a reasonable world. I’m not making any specific claims about sexual harassment rules, but symbolic purposes influence many of the norms and laws that we adopt. That is, we often support such rules not because of the good consequences of having them, but because we like the way that our personal support for such rules makes us look personally. For example, many support laws against drugs and prostitution even when they believe that such laws do little to discourage such things. They want to be personally seen as publicly taking a stand against such behavior.

Consider rules against expressing racism and sexism. And remember that the usual view is that everyone is at least a bit racist and sexist, in part because they live in a racist and sexist society. What happens when we can collect statistics on each person regarding how their visible evaluations of the people around them correlate with the race and sex of those people? Will we then punish white males for displaying statistically-significantly low opinions of non-whites and non-males via their body language? (That’s like a standard we often apply to firms today.) As with sexual harassment, the fact that people can moderate these readings via their behaviors may make these readings seem to count as intentional acts. Especially since they can be tracking the stats themselves, to see the impression they are giving off. To some degree they choose to visibly treat certain people around them with disrespect. And if we are individually eager to show that we personally disapprove of racism and sexism, we may publicly support strict application of such rules even if that doesn’t actually deal well with real problems of racism and sexism in the world.

Remember that this tech should improve gradually. So for the first cases that set key precedents, the tech will be weak and thus flag very few people as clearly harassers or racists or sexists. And those few exceptions are much more likely to be people who actually did intend to harass and express racism or sexism, and who embody extreme versions of such behavior. While they will also probably tend to be people who are weird and non-conformist in other ways, this tech for reading feelings may initially seem to do well at helping us identify and deal with problematic people. For example, we may be glad that tech can identify the priests who most clearly lust after the young boys around them.

But as the tech gets better it will slowly be able to flag more and more people as sending disapproved messages. The rate will drift upward from one person in ten thousand to one in a thousand to one percent and so on. People may then start to change their behavior in bigger ways, to avoid being flagged, but that may be too little too late, especially if large video, etc. libraries of old behaviors are available to process with new methods.

At this point we may reach a “hypocralypse”, where rules that punish hypocrisy collide in a big way with tech that can expose hypocrisy. That is, where tech that can involuntarily show our feelings intersects with norms and laws that punish the expression of common but usually hidden feelings. Especially when such rules are in part symbolically motivated.

What happens then, I don’t know. Do white males start wearing burqas, do we regulate this tech heavily, or do we tone down and relax our many symbolic rules? I’ll hope for the best, but I still fear the worst.


Separate Top-Down, Bottom-Up Brain Credit

Recently I decided to learn more about brain structure and organization, especially in humans. As modularity is a key concept in complex systems, a key question is: what organizing principles explain which parts are connected how strongly to which other parts? (Which in brains is closely related to which parts are physically close to which other parts.) Here are some things I’ve learned, most of which are well known, but one of which might be new.

One obvious modularity principle is functional relation: stuff related to achieving similar functions tends to be connected more to each other. For example, stuff dealing with vision tends to be near other stuff dealing with vision. But as large areas of the brain light up when we do most anything, this clearly isn’t the only organizing principle.

A second organizing principle seems clear: collect things at similar levels of abstraction. The rear parts of our brains tend to focus more on small near concrete details while the front parts of our brain tend to focus on big far abstractions. In between, the degree of abstraction tends to change gradually. This organizing principle is also important in recent deep learning methods, and it predicts the effects seen in construal level theory: when we think about one thing at a certain level of abstraction and distance, we tend to think of related things at similar levels of abstraction and distance. After all, it is easier for activity in one brain region to trigger activity in nearby regions. The trend to larger brains, culminating in humans, has been accompanied by a trend toward larger brain regions that focus on abstractions; we humans think more abstractly than do other animals.

A key fact about human brain organization is that the brain is split into two similar but weakly connected hemispheres. This is strange, as usually we’d think that, all else equal, for coordination purposes each brain module wants to be as close as possible to every other module. What organizing principle can explain this split?

There seems to be a lot of disagreement on how best to summarize how the hemispheres differ. Here are two summaries:

The left hemisphere deals with hard facts: abstractions, structure, discipline and rules, time sequences, mathematics, categorizing, logic and rationality and deductive reasoning, knowledge, details, definitions, planning and goals, words (written and spoken and heard), productivity and efficiency, science and technology, stability, extraversion, physical activity, and the right side of the body. … The right hemisphere specializes in … intuition, feelings and sensitivity, emotions, daydreaming and visualizing, creativity (including art and music), color, spatial awareness, first impressions, rhythm, spontaneity and impulsiveness, the physical senses, risk-taking, flexibility and variety, learning by experience, relationships, mysticism, play and sports, introversion, humor, motor skills, the left side of the body, and a holistic way of perception that recognizes patterns and similarities and then synthesizes those elements into new forms. (more)

The [left] is centered around action and is often the driving force behind risky behaviors. This hemisphere heavily relies upon emotional input leading it to make brash and uncalculated decisions. … The [right] … relies primarily on critical thinking and calculations to reach its decisions.[11] As such the conclusions reached by the [right] often result in avoidance of risk taking behaviors and overall inaction. … . In environments of scarcity, … taking risks is the foundational approach to survival. … However, in environments of abundance, as humans have observed, it is far more likely to die to damaging stimuli. … In areas of prosperity, … [right] domination is prevalent. … In areas of scarcity where cold and limited food are concerns [left] domination is prevalent. (more)

After reading a bit, I tentatively summarize the difference as: the right hemisphere tends to work bottom-up, while the left tends to work top-down. (In a certain sense of these terms.) Inference tends to be bottom-up, in that we aggregate many complex details into inferring fewer bigger things. For example, in a visual scene we start from a movie of pixels over time, and search for sets of possible objects and their motions that can make sense of this movie. In contrast, design tends to be top-down, in that to design a path to get us from here to there, we start with an abstract description of our goal, such as the start and end of our path, and then search for concrete details that can achieve that goal.

The right hemisphere tends to watch, mostly looking out to infer danger, while the left tends to initiate action, and thus must design actions. The right has a wide span of attention, watching the world looking out for surprises, most of which are bad, while the left has a narrow focus of attention, which supports taking purposive action, from which it expects good results. So the right hemisphere tends to do bottom-up processing, while the left does top-down processing.

In bottom-up processing, to explain one set of details one must consider many possible sets of abstractions, while in top-down processing, one set of goals gives rise to many possible specific details to achieve those goals. As a result, we should expect bottom-up work to need more resources at high abstraction levels, while top-down work needs more resources at detailed levels. And in fact, this is what we see in brain structure: the right hemisphere has a larger front abstract end, while the left hemisphere has a larger back concrete end. Our brains are “twisted” in this predicted way.

Why would it make sense to separate bottom-up from top-down thinking? A key problem in the design of intelligent systems is that of how to distribute reward or credit. And a common solution to this problem is to create a standard of good in one part of the system, today often called a “cost function” in AI circles, and then reward or credit other parts of the system for getting closer to achieving that standard. In inference, the standard is typically some form of statistical fit: how well a model of the world predicts the data that one sees. In design, the standard is more naturally centered on goals: how well does a plan achieve its goals?
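To make the contrast concrete, here is a toy sketch of the two kinds of credit standards; the particular scoring forms (a log-likelihood fit score for inference, a goal-distance score for design) are my own illustrative choices, not claims about how brains actually compute.

```python
# Toy illustration of the two credit standards described above. The scoring
# forms here are illustrative choices, not a model of actual brain circuitry.
import math

def inference_credit(model_probs, observed_data):
    """Bottom-up style: credit a candidate world-model for how well it
    predicts the observed details (log-likelihood of the data)."""
    return sum(math.log(model_probs[x]) for x in observed_data)

def design_credit(plan_outcome, goal, tolerance=1.0):
    """Top-down style: credit a candidate plan's details for how closely
    their predicted outcome achieves the abstract goal."""
    return -abs(plan_outcome - goal) / tolerance

# Inference: which weather model better explains what was seen?
data = ["rain", "rain", "sun"]
model_a = {"rain": 0.7, "sun": 0.3}
model_b = {"rain": 0.2, "sun": 0.8}
print(inference_credit(model_a, data) > inference_credit(model_b, data))  # True

# Design: which route plan gets closer to the goal arrival time of 9.0?
print(design_credit(plan_outcome=9.2, goal=9.0) >
      design_credit(plan_outcome=10.5, goal=9.0))                         # True
```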

Top-down and bottom-up styles of processing seem to me to use incompatible systems of credit assignment. That is, it seems hard to design a system that simultaneously credits abstract world scenarios for predicting details seen, while also rewarding details chosen for achieving abstract goals. Credit assignment systems work better when they have a single common direction in which credit flows. One can allow multiple design goals at a similar high level of abstraction, as then the design process can give credit for synergy, and search for details that satisfy all the goals. And one can allow multiple sources of detail, like sight and sound, and combine their statistical credit to infer which objects are moving how. But it seems hard to combine the two systems of credit.

And so that is my proposal for a third organizing principle of brains: separate bottom-up from top-down systems of credit assignment. I haven’t heard anyone else say this, though I wouldn’t be surprised if someone has said it before.

Added 1Sep: The main risk of mixing credit directions is creating self-supporting credit cycles not well connected to real needs. This may be why the connections between the two hemispheres are mostly inhibitory, reducing activity.


My Market Board Game

From roughly 1989 to 1992, I explored the concept of prediction markets (which I then called “idea futures”) in part via building and testing a board game. I thought I’d posted details on my game before, but searching I couldn’t find anything. So here is my board game.

The basic idea is simple: people bet on “who done it” while watching a murder mystery. So my game is an add-on to a murder mystery movie or play, or a game like How to Host a Murder. While watching the murder mystery, people stand around a board where they can reach in with their hands to directly and easily make bets on who done it. Players start with the same amount of money, and in the end whoever has the most money wins (or maybe wins in proportion to their winnings).

Together with Ron Fischer (now deceased) I tested this game a half-dozen times with groups of about a dozen. People understood it quickly and easily, and had fun playing. I looked into marketing the game, but was told that game firms do not listen to proposals by strangers, as they fear being sued later if they came out with a similar game. So I set the game aside.

All I really need to explain here is how mechanically to let people bet on who done it. First, you give all players 200 in cash, and from then on they have access to a “bank” where they can always make “change”:

Poker chips of various colors can represent various amounts, like 1, 5, 10, 25, or 100. In addition, you make similar-sized cards that read things like “Pays 100 if Andy is guilty.” There are different cards for different suspects in the murder mystery, each suspect with a different color card. The “bank” allows exchanges like trading two 5 chips for one 10 chip, or trading 100 in chips for a set of all the cards, one for each suspect.

Second, you make a “market board”, which is an array of slots, each of which can hold either chips or a card. If there were six suspects, an initial market board could look like this:

For this board, each column is about one of the six suspects, and each row is about one of these ten prices: 5,10,15,20,25,30,40,50,60,80. Here is a blow-up of one slot in the array:

Every slot holds either the kind of card for that column, or it holds the amount of chips for that row. The one rule of trading is: for any slot, anyone can swap the right card for the right amount of chips, or can make the opposite swap, depending on what is in the slot at the moment. The swap must be immediate; you can’t put your hand over a slot to reserve it while you get your act together.

This could be the market board near the end of the game:

Here the players have settled on Pam as most likely to have done it, and Fred as least likely. At the end, players compute their final score by combining their cash in chips with 100 for each winning card; losing cards are worth nothing. And that’s the game!

For the initial board, fill a row with chips when the number of suspects times the price for that row is less than 100, and fill that row with cards otherwise. Any number of suspects can work for the columns, and any ordered set of prices between 0 and 100 can work for the rows. I made my boards by taping together clear-color M512 boxes from Tap Plastics, and taping printed white paper on tops around the edge.
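Here is a minimal sketch of these mechanics in code; the interface and some of the suspect names are assumptions for illustration, not a faithful reproduction of the physical board.

```python
# Sketch of the market board mechanics described above. The interface and
# some suspect names are illustrative assumptions.

PRICES = [5, 10, 15, 20, 25, 30, 40, 50, 60, 80]
SUSPECTS = ["Andy", "Beth", "Carl", "Dora", "Fred", "Pam"]  # partly assumed
PAYOFF = 100   # each card pays 100 if its suspect turns out to be guilty

def initial_board():
    """Fill a slot with chips when suspects * price < 100, else with a card."""
    return {(s, p): ("chips", p) if len(SUSPECTS) * p < 100 else ("card", s)
            for s in SUSPECTS for p in PRICES}

def swap(board, suspect, price, player_gives):
    """The one trading rule: at any slot, swap the right card for that row's
    chip amount, or the reverse, depending on what the slot currently holds."""
    slot = board[(suspect, price)]
    if slot == ("chips", price) and player_gives == ("card", suspect):
        board[(suspect, price)] = ("card", suspect)
        return ("chips", price)       # player sells a card, receives chips
    if slot == ("card", suspect) and player_gives == ("chips", price):
        board[(suspect, price)] = ("chips", price)
        return ("card", suspect)      # player buys the card with chips
    raise ValueError("that swap is not available at this slot right now")

def final_score(chips, cards, guilty):
    """Chips in hand plus 100 per winning card; losing cards are worthless."""
    return chips + PAYOFF * sum(1 for c in cards if c == guilty)

board = initial_board()
got = swap(board, "Pam", 20, ("chips", 20))     # buy a Pam card for 20 chips
print(got, final_score(chips=180, cards=["Pam"], guilty="Pam"))  # ('card', 'Pam') 280
```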

Added 30Aug: Here are a few observations about game play. 1) Many, perhaps most, players were so engaged by “day trading” in this market that they neglected to watch and think enough about the murder mystery. 2) You can allow players to trade directly with each other, but players show little interest in doing this. 3) Players found it more natural to buy than to sell. As a result, prices drifted upward, and often the sum of the buy prices for all the suspects was over 100. An electronic market maker could ensure that such arbitrage opportunities never arise, but in this mechanical version some players specialized in noticing and correcting this error.

Added 31Aug: A twitter poll picked a name for this game: Murder, She Bet.

Added 9Sep: Expert gamer Zvi Mowshowitz gives a detailed analysis of this game. He correctly notes that incentives for accuracy are lower in the endgame, though I didn’t notice substantial problems with endgame accuracy in the trials I ran.
