Commitments Explain Gaps

Consider trying to predict the details of unattached people’s kisses. That is, you might have data on whom such people have actually kissed, when, where, and how, and data on whom they say they would be willing to kiss under what circumstances. From such data you make models that predict both the kisses that actually happen and the kisses they say they are willing to join. For example, you may notice that they kiss more when they are awake, are not busy with other activities, and are feeling frisky. They kiss more when they and their partner are clean and well groomed. They kiss more when they are more attractive to others, and when other willing partners are more attractive to them according to their preferences.

Now consider doing the same exercise for people who are married. When you fit this sort of data, you will find one new big factor: they almost always kiss only their spouse. And if you try to explain both these datasets in the same terms, you’d have to say spouses are in some strange way vastly more attracted to each other than they are to everyone else. This attraction is strange because it isn’t explained by other measurable features you can see, and no one else seems to feel this extra attraction.

Of course the obvious explanation here is that married people typically make a commitment to kiss only each other. Yes there is a sense in which they are attracted more to each other than to other people, but this isn’t remotely sufficient to explain their extreme tendencies to kiss only each other. It is their commitment that explains this behavior gap, i.e., this extra strong preference for each other.

Now consider trying to predict policies and public attitudes regarding limits on who can migrate where, and who can buy products and services from where. And consider trying to predict this using the foreseeable concrete consequences of such policy limits. In principle, many factors seem relevant. Different kinds of people and products might produce different externalities in different situations. Their quality might be uncertain and depend on various features. One might naturally want a process to consider potential candidates and review their suitability.

Such models might predict more limits on people and products that come from further away in spatial and cultural distance, more limits on things that have lower quality and higher risks, and more limits when there is more infrastructure to help enforce such limits. And in fact those sorts of models seem to do okay at predicting the following two kinds of variation: variation in limits on people and products that move between nations, and variation in limits on people and products that move within nations.

However, if we compare limits between nations and limits within nations, these sorts of models seem to me to have a big explanatory gap, analogous to the kissing attractiveness gap in models that predict the kisses of married spouses. Between nations, the default is to have substantial limits on the movement of people and products, while within nations the strong default is to allow unlimited movement of people and products.

Yes, the context of movement between nations seems to be on average different from movement within nations, and different in the directions predicted to result in bigger limits on movement. At least according to the models we would use to explain such variation between nations, and such variation within nations. But while the directions make sense, the magnitudes are strangely enormous. A similar degree of difference within a nation results in far smaller limits on the movement of people and products than does a comparable degree of difference between nations.

We are thus left with another explanatory gap: we need something else to explain why people are so reluctant to allow movement between nations, relative to movement within nations. And my best guess is that the answer here is another kind of commitment: people feel that they have committed to allowing movement within nations, even if that causes problems, and have committed to being suspicious of movement between nations, even if that makes them lose out on opportunities. That is part of what they commit themselves to by joining a nation.

If this explanation is correct, it of course raises the question of whether this is a sensible commitment to make. For that, we need a better analysis of the benefits and costs of committing to joining nations, an under-explored but important topic.

A Coming Hypocralypse?

Many people have been working hard for a long time to develop tech that helps to read people’s feelings. They are working on ways to read facial expressions, gazes, word choices, tones of voice, sweat, skin conductance, gait, nervous habits, and many other body features and motions. Over the coming years, we should expect this tech to consistently get cheaper and better at reading subtler feelings of more people in more kinds of contexts more reliably.

Much of this tech will be involuntary. While your permission and assistance may help such tech to read you better, others will often be able to read you using tech that they control, on their persons and in the buildings around you. They can use tech integrated with other complex systems that is thus hard to monitor and regulate. Yes, some defenses are possible, such as via wearing dark sunglasses or burqas, and electronically modulating your voice. But such options seem rather awkward and I doubt most people will be willing to use them much in most familiar social situations. And I doubt that regulation will greatly reduce the use of this tech. The overall trend seems clear: our true feelings will become more visible to people around us.

We are often hypocritical about our feelings. That is, we pretend to some degree to have certain acceptable public feelings, while actually harboring different feelings. Most people know that this happens often, but our book The Elephant in the Brain suggests that we still vastly underestimate typical levels of hypocrisy. We all mask our feelings a lot, quite often from ourselves. (See our book for many more details.)

These two facts, better tech for reading feelings and widespread hypocrisy, seem to me to be on a collision course. As a result, within a few decades, we may see something of a “hypocrisy apocalypse”, or “hypocralypse”, wherein familiar ways to manage hypocrisy become no longer feasible, and collide with common norms, rules, and laws. In this post I want to outline some of the problems we face.

Long ago, I was bullied as a child. And so I know rather well that one of the main defenses that children develop to protect themselves against bullies is to learn to mask their feelings. Bullies tend to see kids who are visibly scared or distraught as openly inviting them to bully. Similarly, many adults protect themselves from salespeople and sexual predators by learning to mask their feelings. Masking our feelings also helps us avoid conflict with rivals at work and in other social circles. For example, we learn to not visibly insult or disrespect big people in rowdy bars if we don’t want to get beaten up.

Tech that unmasks feelings threatens to weaken the protections that masked feelings provide. That big guy in a rowdy bar may use new tech to see that everyone else there can see that you despise him, and take offense. Your bosses might see your disrespect for them, or your skepticism regarding their new initiatives. Your church could see that you aren’t feeling very religious at church service. Your school and nation might see that your pledge of allegiance was not heart-felt. And so on.

While these seem like serious issues, change will be mostly gradual and so we may have time to flexibly search in the space of possible adaptations. We can try changing with whom we meet how for what purposes, and what topics we consider acceptable to discuss where. We can be more selective about whom we make more visible, and how.

I worry more about collisions between better tech for reading feelings and common social norms, rules, and laws. Especially norms and laws that we adopt for more symbolic purposes, instead of to actually manage our interactions. These things tend to be less responsive to changing conditions.

For example, today we often consider it to be unacceptable “sexual harassment” to repeatedly and openly solicit work associates for sex, especially after they’ve clearly rejected the solicitor. We typically disapprove not just of direct requests, but also of less direct but relatively clear invitation reminders, such as visible leers, sexual jokes, and calling attention to your “junk”. And of course such rules make a great deal of sense.

But what happens when tech can make it clearer who is sexually attracted how much to whom? If the behavior that led to these judgements was completely out of each person’s control, it might be hard to blame on anyone. We might then socially pretend that it doesn’t exist, though we might eagerly check it out privately. Unfortunately, our behavior will probably continue to modulate the processes that produce such judgements.

For example, the systems that judge how attracted you are to someone might focus on the moments when you directly look at that person, when your face is clearly visible to some camera, under good lighting. Without your wearing sunglasses or a burqa. So the longer you spend directly looking at someone under such conditions, the better the tech will be able to see your attraction. As a result, your choice to spend more time looking directly at them under favorable reading conditions might be seen as an intentional act, a choice to send the message that you are sexually attracted to them. And thus your continuing to do so after they have clearly rejected you might be seen as sexual harassment.

Yes, a reasonable world might adjust rules on sexual harassment to account for many complex changing conditions. But we may not live in a reasonable world. I’m not making any specific claims about sexual harassment rules, but symbolic purposes influence many of the norms and laws that we adopt. That is, we often support such rules not because of the good consequences of having them, but because we like the way that our personal support for such rules makes us look personally. For example, many support laws against drugs and prostitution even when they believe that such laws do little to discourage such things. They want to be personally seen as publicly taking a stand against such behavior.

Consider rules against expressing racism and sexism. And remember that the usual view is that everyone is at least a bit racist and sexist, in part because they live in a racist and sexist society. What happens when we can collect statistics on each person regarding how their visible evaluations of the people around them correlate with the race and sex of those people? Will we then punish white males for displaying statistically-significantly low opinions of non-whites and non-males via their body language? (That’s like a standard we often apply to firms today.) As with sexual harassment, the fact that people can moderate these readings via their behaviors may make these readings seem to count as intentional acts. Especially since they can be tracking the stats themselves, to see the impression they are giving off. To some degree they choose to visibly treat certain people around them with disrespect. And if we are individually eager to show that we personally disapprove of racism and sexism, we may publicly support strict application of such rules even if that doesn’t actually deal well with real problems of racism and sexism in the world.

Remember that this tech should improve gradually. So for the first cases that set key precedents, the tech will be weak and thus flag very few people as clearly harassers or racists or sexists. And those few exceptions are much more likely to be people who actually did intend to harass and express racism or sexism, and who embody extreme versions of such behavior. While they will also probably tend to be people who are weird and non-conformist in other ways, this tech for reading feelings may initially seem to do well to help us identify and deal with problematic people. For example, we may be glad that tech can identify the priests who most clearly lust after the young boys around them.

But as the tech gets better it will slowly be able to flag more and more people as sending disapproved messages. The rate will drift upward from one person in ten thousand to one in a thousand to one percent and so on. People may then start to change their behavior in bigger ways, to avoid being flagged, but that may be too little too late, especially if large video, etc. libraries of old behaviors are available to process with new methods.

At this point we may reach a “hypocralypse”, where rules that punish hypocrisy collide in a big way with tech that can expose hypocrisy. That is, where tech that can involuntarily show our feelings intersects with norms and laws that punish the expression of common but usually hidden feelings. Especially when such rules are in part symbolically motivated.

What happens then, I don’t know. Do white males start wearing burqas, do we regulate this tech heavily, or do we tone down and relax our many symbolic rules? I’ll hope for the best, but I still fear the worst.

Separate Top-Down, Bottom-Up Brain Credit

Recently I decided to learn more about brain structure and organization, especially in humans. As modularity is a key concept in complex systems, a key question is: what organizing principles explain which parts are connected how strongly to which other parts? (Which in brains is closely related to which parts are physically close to which other parts.) Here are some things I’ve learned, most of which are well known, but one of which might be new.

One obvious modularity principle is functional relation: stuff related to achieving similar functions tends to be connected more to each other. For example, stuff dealing with vision tends to be near other stuff dealing with vision. But as large areas of the brain light up when we do most anything, this clearly isn’t the only organizing principle.

A second organizing principle seems clear: collect things at similar levels of abstraction. The rear parts of our brains tend to focus more on small near concrete details while the front parts of our brain tend to focus on big far abstractions. In between, the degree of abstraction tends to change gradually. This organizing principle is also important in recent deep learning methods, and it predicts the effects seen in construal level theory: when we think about one thing at a certain level of abstraction and distance, we tend to think of related things at similar levels of abstraction and distance. After all, it is easier for activity in one brain region to trigger activity in nearby regions. The trend to larger brains, culminating in humans, has been accompanied by a trend toward larger brain regions that focus on abstractions; we humans think more abstractly than do other animals.

A key fact about human brain organization is that the brain is split into two similar but weakly connected hemispheres. This is strange, as usually we’d think that, all else equal, for coordination purposes each brain module wants to be as close as possible to every other module. What organizing principle can explain this split?

There seems to be a lot of disagreement on how best to summarize how the hemispheres differ. Here are two summaries:

The left hemisphere deals with hard facts: abstractions, structure, discipline and rules, time sequences, mathematics, categorizing, logic and rationality and deductive reasoning, knowledge, details, definitions, planning and goals, words (written and spoken and heard), productivity and efficiency, science and technology, stability, extraversion, physical activity, and the right side of the body. … The right hemisphere specializes in … intuition, feelings and sensitivity, emotions, daydreaming and visualizing, creativity (including art and music), color, spatial awareness, first impressions, rhythm, spontaneity and impulsiveness, the physical senses, risk-taking, flexibility and variety, learning by experience, relationships, mysticism, play and sports, introversion, humor, motor skills, the left side of the body, and a holistic way of perception that recognizes patterns and similarities and then synthesizes those elements into new forms. (more)

The [left] is centered around action and is often the driving force behind risky behaviors. This hemisphere heavily relies upon emotional input leading it to make brash and uncalculated decisions. … The [right] … relies primarily on critical thinking and calculations to reach its decisions.[11] As such the conclusions reached by the [right] often result in avoidance of risk taking behaviors and overall inaction. … . In environments of scarcity, … taking risks is the foundational approach to survival. … However, in environments of abundance, as humans have observed, it is far more likely to die to damaging stimuli. … In areas of prosperity, … [right] domination is prevalent. … In areas of scarcity where cold and limited food are concerns [left] domination is prevalent. (more)

After reading a bit, I tentatively summarize the difference as: the right hemisphere tends to work bottom-up, while the left tends to work top-down. (In a certain sense of these terms.) Inference tends to be bottom-up, in that we aggregate many complex details into inferring fewer bigger things. For example, in a visual scene we start from a movie of pixels over time, and search for sets of possible objects and their motions that can make sense of this movie. In contrast, design tends to be top-down, in that to design a path to get us from here to there, we start with an abstract description of our goal, such as the start and end of our path, and then search for concrete details that can achieve that goal.

The right hemisphere tends to watch, mostly looking out to infer danger, while the left tends to initiate action, and thus must design actions. The right has a wide span of attention, watching the world looking out for surprises, most of which are bad, while the left has a narrow focus of attention, which supports taking purposive action, from which it expects good results. So the right hemisphere tends to do bottom-up processing, while the left does top-down processing.

In bottom-up processing, to explain one set of details one must consider many possible sets of abstractions, while in top-down processing, one set of goals gives rise to many possible specific details to achieve those goals. As a result, we should expect bottom-up work to need more resources at high abstraction levels, while top-down work needs more resources at detailed levels. And in fact, this is what we see in brain structure: the right hemisphere has a larger front abstract end, while the left hemisphere has a larger back concrete end. Our brains are “twisted” in this predicted way.

Why would it make sense to separate bottom-up from top-down thinking? A key problem in the design of intelligent systems is that of how to distribute reward or credit. And a common solution to this problem is to create a standard of good in one part of the system, today often called a “cost function” in AI circles, and then reward or credit other parts of the system for getting closer to achieving that standard. In inference, the standard is typically some form of statistical fit: how well a model of the world predicts the data that one sees. In design, the standard is more naturally centered on goals: how well does a plan achieve its goals?

Top-down and bottom-up styles of processing seem to me to use incompatible systems of credit assignment. That is, it seems hard to design a system that simultaneously credits abstract world scenarios for predicting details seen, while also rewarding details chosen for achieving abstract goals. Credit assignment systems work better when they have a single common direction in which credit flows. One can allow multiple design goals at a similar high level of abstraction, as then the design process can give credit for synergy, and search for details that satisfy all the goals. And one can allow multiple sources of detail, like sight and sound, and combine their statistical credit to infer which objects are moving how. But it seems hard to combine the two systems of credit.
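The two credit directions can be made concrete with a toy sketch in Python. Everything here is invented for illustration (the hypotheses, plans, and scoring rules are not a brain model): bottom-up credit flows from concrete data to the abstract hypotheses that fit it, while top-down credit flows from an abstract goal to the concrete plans that approach it.

```python
# Toy sketch of two credit-assignment directions, each driven by a
# single "standard of good" (cost function). All data are invented.

def inference_credit(hypotheses, data):
    # Bottom-up: score abstract hypotheses by statistical fit,
    # here negative squared prediction error against concrete data.
    return {name: -sum((p - d) ** 2 for p, d in zip(pred, data))
            for name, pred in hypotheses.items()}

def design_credit(plans, goal):
    # Top-down: score concrete plans by how close each one's
    # final state comes to an abstract goal.
    return {name: -abs(end - goal) for name, end in plans.items()}

data = [1.0, 2.1, 2.9]                    # concrete observations
hyps = {"rising": [1.0, 2.0, 3.0],        # candidate abstractions
        "flat":   [2.0, 2.0, 2.0]}
plans = {"detour": 7.0, "direct": 9.5}    # candidate concrete plans
goal = 10.0

fit = inference_credit(hyps, data)
closeness = design_credit(plans, goal)
best_hyp = max(fit, key=fit.get)              # "rising" fits the data best
best_plan = max(closeness, key=closeness.get) # "direct" gets nearest the goal
```

Note the opposite flows: the first function holds the details fixed and scores abstractions, while the second holds the abstraction fixed and scores details. Letting credit flow both ways between the same units would risk circular scoring.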

And so that is my proposal for a third organizing principle of brains: separate bottom-up from top-down systems of credit assignment. I haven’t heard anyone else say this, though I wouldn’t be surprised if someone has said it before.

Added 1Sep: The main risk of mixing credit directions is creating self-supporting credit cycles not well connected to real needs. This may be why the connections between the two hemispheres are mostly inhibitory, reducing activity.

My Market Board Game

From roughly 1989 to 1992, I explored the concept of prediction markets (which I then called “idea futures”) in part via building and testing a board game. I thought I’d posted details on my game before, but searching I couldn’t find anything. So here is my board game.

The basic idea is simple: people bet on “who done it” while watching a murder mystery. So my game is an add-on to a murder mystery movie or play, or a game like How to Host a Murder. While watching the murder mystery, people stand around a board where they can reach in with their hands to directly and easily make bets on who done it. Players start with the same amount of money, and in the end whoever has the most money wins (or maybe wins in proportion to their winnings).

Together with Ron Fischer (now deceased) I tested this game a half-dozen times with groups of about a dozen. People understood it quickly and easily, and had fun playing. I looked into marketing the game, but was told that game firms do not listen to proposals by strangers, as they fear being sued later if they came out with a similar game. So I set the game aside.

All I really need to explain here is how mechanically to let people bet on who done it. First, you give all players 200 in cash, and from then on they have access to a “bank” where they can always make “change”:

Poker chips of various colors can represent various amounts, like 1, 5, 10, 25, or 100. In addition, you make similar-sized cards that read things like “Pays 100 if Andy is guilty.” There are different cards for different suspects in the murder mystery, each suspect with a different color card. The “bank” allows exchanges like trading two 5 chips for one 10 chip, or trading 100 in chips for a set of all the cards, one for each suspect.

Second, you make a “market board”, which is an array of slots, each of which can hold either chips or a card. If there were six suspects, an initial market board could look like this:

For this board, each column is about one of the six suspects, and each row is about one of these ten prices: 5, 10, 15, 20, 25, 30, 40, 50, 60, 80. Here is a blow-up of one slot in the array:

Every slot holds either the kind of card for that column, or it holds the amount of chips for that row. The one rule of trading is: for any slot, anyone can swap the right card for the right amount of chips, or can make the opposite swap, depending on what is in the slot at the moment. The swap must be immediate; you can’t put your hand over a slot to reserve it while you get your act together.

This could be the market board near the end of the game:

Here the players have settled on Pam as most likely to have done it, and Fred as least likely. At the end, players compute their final score by combining their cash in chips with 100 for each winning card; losing cards are worth nothing. And that’s the game!

For the initial board, fill a row with chips when the number of suspects times the price for that row is less than 100, and fill that row with cards otherwise. Any number of suspects can work for the columns, and any ordered set of prices between 0 and 100 can work for the rows. I made my boards by taping together clear-color M512 boxes from Tap Plastics, and taping printed white paper on tops around the edge.
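As a check on the rules above, here is a minimal Python sketch of the board. The suspect names and data structures are my own invention, not part of the physical game: it fills the initial board by the rule just given, and implements the one trading rule for swapping at a slot.

```python
def init_board(suspects, prices):
    # Fill a row with chips when (number of suspects * price) < 100;
    # otherwise fill that row with each column's suspect card.
    board = {}
    for price in prices:
        for s in suspects:
            if len(suspects) * price < 100:
                board[(s, price)] = ("chips", price)
            else:
                board[(s, price)] = ("card", s)
    return board

def swap(board, suspect, price):
    # The one trading rule: at any slot, trade that row's chips for
    # the slot's card, or vice versa, depending on what sits there now.
    kind, _ = board[(suspect, price)]
    if kind == "card":
        board[(suspect, price)] = ("chips", price)  # trader pays chips in
        return ("card", suspect)                    # and takes the card out
    board[(suspect, price)] = ("card", suspect)     # trader pays a card in
    return ("chips", price)                         # and takes the chips out

suspects = ["Andy", "Fred", "Pam"]                  # invented names
prices = [5, 10, 15, 20, 25, 30, 40, 50, 60, 80]
board = init_board(suspects, prices)
bought = swap(board, "Pam", 40)  # pay 40 in chips, receive a Pam card
```

A purchase and a sale are the same move seen from opposite sides: whoever swaps leaves the slot holding the opposite kind of content, ready for the next trader.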

Added 30Aug: Here are a few observations about game play. 1) Many, perhaps most, players were so engaged by “day trading” in this market that they neglected to watch and think enough about the murder mystery. 2) You can allow players to trade directly with each other, but players show little interest in doing this. 3) Players found it more natural to buy than to sell. As a result, prices drifted upward, and often the sum of the buy prices for all the suspects was over 100. An electronic market maker could ensure that such arbitrage opportunities never arise, but in this mechanical version some players specialized in noticing and correcting this error.
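The arbitrage mentioned here can be made concrete with a small sketch; the board state below is invented. If the best sell price in each column, that is, the highest row still holding chips, sums to more than 100, a trader can buy a full card set from the bank for 100 and sell one card into each of those slots for a sure profit.

```python
def best_sell_prices(chip_rows):
    # chip_rows maps each suspect to the rows currently holding chips;
    # the highest such row is the best price at which to sell that card.
    return {s: max(rows) for s, rows in chip_rows.items()}

def arbitrage_profit(chip_rows, set_cost=100):
    # Buy a full set of cards from the bank for 100 in chips, then
    # sell each card at its column's best chip-holding row.
    proceeds = sum(best_sell_prices(chip_rows).values())
    return max(0, proceeds - set_cost)

# Invented late-game state: players have bid every suspect up a bit.
chip_rows = {"Andy": [5, 10, 15, 20, 25],
             "Fred": [5, 10, 15],
             "Pam":  [5, 10, 15, 20, 25, 30, 40, 50, 60, 80]}
profit = arbitrage_profit(chip_rows)  # 25 + 15 + 80 = 120, so profit 20
```

This is roughly the role the specialist players took on in the trials: scan the columns, sum the best chip-holding rows, and trade whenever that total strays above 100.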

Added 31Aug: A twitter poll picked a name for this game: Murder, She Bet.

Added 9Sep: Expert gamer Zvi Mowshowitz gives a detailed analysis of this game. He correctly notes that incentives for accuracy are lower in the endgame, though I didn’t notice substantial problems with endgame accuracy in the trials I ran.

Spaceship Earth Explores Culture Space

Space: the final frontier. These are the voyages of the starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before. (more)

Many love science fiction stories of brave crews risking their lives to explore strange new spaces, stories much like the older adventure stories about European explorers risking their lives centuries ago to explore new places on Earth. (Yes, often to conquer and enslave the locals.) Many lament that we don’t have as many real such explorer stories today, and they say that we should support more human space exploration now in order to create such real heroic exploration stories. Even though human space exploration is crazy expensive now, and offers few scientific, economic, or humanity-survival gains anytime soon. They say the good stories will be worth all that cost.

Since Henry George first invoked it in 1879, many have used the metaphor of Spaceship Earth to call attention to our common vulnerability and limited resources:

Spaceship Earth … is a world view encouraging everyone on Earth to act as a harmonious crew working toward the greater good. … “we must all cooperate and see to it that everyone does his fair share of the work and gets his fair share of the provisions” … “We travel together, passengers on a little space ship, dependent on its vulnerable reserves of air and soil.” (more)

In this post, I want to suggest that Spaceship Earth is in fact a story of a brave crew risking much to explore a strange new territory. But the space we explore is more cultural than physical.

During the industrial era, the world economy has doubled roughly every fifteen years. Each such doubling of output has moved us into new uncharted cultural territory. This growth has put new pressures on our environment, and has resulted in large and rapid changes to our culture and social organization.

This growth results mostly from innovation, and most innovations are small and well tested against local conditions, giving us little reason to doubt their local value. But all these small changes add up to big overall moves that are often entangled with externalities, coordination failures, and other reasons to doubt their net value.

So humanity continues to venture out into new untried and risky cultural spaces, via changes to cultural conditions with which we don’t have much experience, and which thus risk disaster and destruction. The good crew of Spaceship Earth should carefully weigh these risks when considering where and how fast to venture.

Consider seven examples:

  1. While humans seem to be adapting reasonably well to global warming, we risk big lumpy disruptive changes to Atlantic currents and Antarctic ice. Ecosystems also seem to be adapting okay, but we are risking big collapses to them as well.
  2. While ancient societies gave plenty of status and rewards to fertility, today high fertility behaviors are mostly seen as low status. This change is entwined with complex changes in gender norms and roles, but one result is that human fertility is falling toward below replacement in much of the world, and may fall much further. Over centuries this might produce a drastic decrease in world population, and productivity-threatening decreases in the scale of world production.
  3. While the world has become much more peaceful over the last century, this has been accompanied by big declines in cultural support for military action and tolerance for military losses. Is the world now more vulnerable to conquest by a new military power with more local cultural support and tolerance for losses?
  4. Farmer era self-control and self-discipline has weakened over time, in part via weaker religion. This has weakened cultural support for work and cultural suspicion of self-indulgence in sex, drugs, and media. So we now see less work and more drug addiction. How far will we slide?
  5. Via new media, we are exploring brave new worlds of how to make friends, form identities, achieve status, and learn about the world. As many have noted, these new ways risk many harms to happiness and social capital.
  6. Innovation was once greatly aided by tinkering, i.e., the ability to take apart and change familiar devices. Such tinkering is much less feasible in modern devices. Increasing regulation and risk aversion is also interfering with innovation. Are we as a result risking cultural support for innovation?
  7. Competition between firms has powered rapid growth, but winning bets on intangible capital is allowing leading firms to increasingly dominate industries. Does this undermine the competition that we’ve relied on so far to power growth?

The most common framing today for such issues is one of cultural war. You ask yourself which side feels right to you, commiserate with your moral allies, then puff yourself up with righteous indignation against those who see things differently, and go to war with them. But we might do better to frame these as reasonable debates on how much to risk as we explore culture space.

In a common scene from exploration stories, a crew must decide whether to take a big risk, or choose among several risks. Some in the crew see a risk as worth the potential reward, while others want to search longer for better options, or retreat to try again another day. They may disagree on the tradeoff, but they all agree that both the risks and the rewards are real. It is just a matter of tradeoff details.

We might similarly frame key “value” debates as reasonable differing judgements on what chances to take as spaceship Earth explores culture space. Those who love new changes could admit that we are taking some chances in adopting them so quickly, with so little data to go on, while those who are suspicious of recent changes could admit that many seem to like their early effects. Rather than focus on directly evaluating changes, we might focus more on setting up tracking systems to watch for potential problems, and arranging for repositories of old culture practices that might help us to reverse changes if things go badly. And we might all see ourselves as part of a grand heroic adventure story, wherein a mostly harmonious crew explores a great strange cosmos of possible cultures.

If The Future Is Big

One way to predict the future is to find patterns in the past, and extend them into the future. And across the very long term history of everything, the one most robust pattern I see is: growth. Biology, and then humanity, has consistently grown in ability, capacity, and influence. Yes, there have been rare periods of widespread decline, but overall in the long run there has been far more growth than decline. 

We have good reasons to expect growth. Most growth is due to innovation, and once learned many innovations are hard to unlearn. Yes there have been some big widespread declines in history, such as the medieval Black Death and the decline of the Roman and Chinese empires at about the same time. But the historians who study the biggest such declines see them as surprisingly large, not surprisingly small. Knowing the details of those events, they would have been quite surprised to see such declines be ten times larger than those seen. Yes it is possible in principle that we’ve been lucky and most planets or species that start out like ours went totally extinct. But if smaller declines are more common than bigger ones, the lack of big but not total declines in our history suggests that the chances of extinction-level declines were low.

Yes, we should worry about the possibility of a big future decline soon. Perhaps due to global warming, resource exhaustion, falling fertility, or institutional rot. But this is mainly because the consequences would be so dire, not because such declines are likely. Even declines comparable in magnitude to the largest seen in history do not seem to me remotely sufficient to prevent the revival of long term growth afterward, as they do not prevent continued innovation. Thus while long-term growth is far from inevitable, it seems the most likely scenario to consider.

If growth is our most robust expectation for the future, what does that growth suggest or imply? The rest of this post summarizes many such plausible implications. There are far more of them than many realize.

Before I list the implications, consider an analogy. Imagine that you lived in a small mountain village, but that a huge city lay in the valley below. While it might be hard to see or travel to that city, the existence of that city might still change your mountain village life in many important ways. A big future can be like that big city to the village that is our current world. Now for those implications:


My Kind of Atheist

I think I’ve mentioned somewhere in public that I’m now an atheist, even though I grew up in a very Christian family, and even joined a “cult” at a young age (against disapproving parents). The proximate cause of my atheism was learning physics in college. But I don’t think I’ve ever clarified in public what kind of an “atheist” or “agnostic” I am. So here goes.

The universe is vast and most of it is very far away in space and time, making our knowledge of those distant parts very thin. So it isn’t at all crazy to think that very powerful beings exist somewhere far away out there, or far before us or after us in time. In fact, many of us hope that we now can give rise to such powerful beings in the distant future. If those powerful beings count as “gods”, then I’m certainly open to the idea that such gods exist somewhere in space-time.

It also isn’t crazy to imagine powerful beings that are “closer” in space and time, but far away in causal connection. They could be in parallel “planes”, in other dimensions, or in “dark” matter that doesn’t interact much with our matter. Or they might perhaps have little interest in influencing or interacting with our sort of things. Or they might just “like to watch.”

But to most religious people, a key emotional appeal of religion is the idea that gods often “answer” prayer by intervening in their world. Sometimes intervening in their head to make them feel different, but also sometimes responding to prayers about their test tomorrow, their friend’s marriage, or their aunt’s hemorrhoids. It is this sort of prayer-answering “god” in which I just can’t believe. Not that I’m absolutely sure they don’t exist, but I’m sure enough that the term “atheist” fits much better than the term “agnostic.”

These sorts of gods supposedly intervene in our world millions of times daily to respond positively to particular prayers, and yet they do not noticeably intervene in world affairs. Not only can we find no physical trace of any machinery or system by which such gods exert their influence, even though we understand the physics of our local world very well, but the history of life and civilization shows no obvious traces of their influence. They know of terrible things that go wrong in our world, but instead of doing much about those things, these gods instead prioritize not leaving any clear evidence of their existence or influence. And yet for some reason they don’t mind people believing in them enough to pray to them, as they often reward such prayers with favorable interventions.

Yes, the space of possible minds is vast, as is the space of possible motivations. So yes, somewhere in that space is a subspace of minds who would behave in exactly this manner, if they were powerful enough to count as “gods”. But that subspace seems to me rather small relative to the total space. And so the prior probability that all or most nearby gods have this sort of strange motivation also seems to me quite small. It seems a crazy implausible hypothesis.

Yes, the fact that people claim to feel that gods answer their prayers is, all else equal, evidence for that hypothesis. But the other obvious hypothesis to consider here is that people claim this because it comforts them to believe so, not because they’ve carefully studied their evidence. Long ago people had much less evidence on physics and the universe, and for them it was both plausible and socially functional to believe in powerful gods who sometimes responded to humans, including their prayers. This belief became deeply embedded in cultures, cultures which just do not respond very quickly or strongly to recent changes in our best evidence on physics and the universe. (Though they respond quickly enough to make up excuses like “God wants you to believe in him for special reasons.”) And so many still believe that gods answer prayers.

In conclusion, it isn’t crazy to think there are powerful gods far away in space or time, and perhaps close but far in causal connection. But it does seem to me crazy to believe in gods nearby who favorably answer prayers, but who also hide and don’t intervene much in world affairs. That hypothesis seems vastly less likely than the obvious alternative, of slowly updating cultures.

I expect my position to be pretty widely held among thoughtful intellectuals; can we find a good name for it? Prayer-atheists perhaps?


Sanctimonious Econ Critics

The New Yorker review of Elephant in the Brain raved about Cents and Sensibility, by Gary Saul Morson and Morton Schapiro, a book said to confirm that “intellectual overextension is often found in economics.” Others have similarly raved. But I don’t care much for this book, so let me explain why. (Be warned: this post is LONG.)

In its first sentence, the book declares its aim:

This book creates a dialogue between two fields that rarely have anything to say to each other: economics and the humanities. We mean to show how that dialogue could be conducted and why it has a great deal to contribute. (p.1)

Morson and Schapiro seem to want the sort of “dialogue” where one side talks and the other just listens. All but one chapter elaborates how economists should listen to the humanities, and the one remaining chapter is on how some parts of the humanities should listen to another part, not to economists. There’s only a two-page section near the end on “What Humanists Can Learn From Economists,” which even then can’t resist talking more about what economists can learn:

Economists could learn from humanists the complexity of ethical issues, the need for stories, the importance of empathy, and the value of unformalizable good judgement. But humanists could also learn from economists how to think about scarce resources, about the nature of efficiency, and the importance of rational decision making. (p.261)

So what exactly can we economists learn?


Compulsory Licensing Of Backroom IT?

We now understand one of the main reasons that many leading firms have been winning relative to others, resulting in higher markups, profits, and wage inequality:

The biggest companies in every field are pulling away from their peers faster than ever, sucking up the lion’s share of revenue, profits and productivity gains. Economists have proposed many possible explanations: top managers flocking to top firms, automation creating an imbalance in productivity, merger-and-acquisition mania, lack of antitrust regulation and more. But new data suggests that … IT spending that goes into hiring developers and creating software owned and used exclusively by a firm is the key competitive advantage. It’s different from our standard understanding of R&D in that this software is used solely by the company, and isn’t part of products developed for its customers.

Today’s big winners went all in. …Tech companies such as Google, Facebook, Amazon and Apple—as well as other giants including General Motors and Nissan in the automotive sector, and Pfizer and Roche in pharmaceuticals—built their own software and even their own hardware, inventing and perfecting their own processes instead of aligning their business model with some outside developer’s idea of it. … “IT intensity,” is relevant not just in the U.S. but across 25 other countries as well. …

When new technologies were developed in the past, they would diffuse to other firms fast enough so that productivity rose across entire industries. … But imagine instead of power looms, someone is trying to copy and reproduce Google’s cloud infrastructure itself. … Things have just gotten too complicated. The technologies we rely on now are massive and inextricably linked to the engineers, workers, systems and business models built around them. … While in the past it might have been possible to license, steal or copy someone else’s technology, these days that technology can’t be separated from the systems of which it’s a part. … Walmart built an elaborate logistics system around bar code scanners, which allowed it to beat out smaller retail rivals. Notably, it never sold this technology to any competitors. (more)

A policy paper goes into more detail. First, why is the IT of some firms so much better?

Proprietary IT thus provides a specific mechanism that can help explain the reallocation to more productive firms, rising industry concentration, also growing productivity dispersion between firms within industries, and growing profit margins. … There is a significant literature that identifies IT-related differences in productivity arising from complementary skills, managerial practices, and business models that are themselves unevenly distributed. Skills and managerial knowledge needed to use major new technologies have often been unevenly distributed initially because much must be learned through experience, which tends to differ substantially from firm to firm.

Yes, skills vary, but there are also just big random factors in the success of large IT systems, even for similar skills. What can we do about all this?

While there may be other reasons to question antitrust policies, the general rise in industry concentration does not appear to raise troubling issues for antitrust enforcement at this point by itself. …

Both IP law and antitrust law pay heed to … balancing innovation incentives against the need for disclosure and competition, balancing concerns about market power against considerations of efficiency. … This balance has been lost with regard to information technology … the policy challenge is to offset this trend. … This problem might require some lessening of innovation incentives. … The challenge both today and in the future for both IP and antitrust policy is to facilitate the diffusion of new technical knowledge and right now the trend seems to be in the wrong direction. …

To the extent that rising use of employee noncompete agreements limits the ability of technical employees to take their skills to new firms, diffusion is slowed. Similarly, for extensions of trade secrecy law to cover knowhow or the presumption of inevitable disclosure. Patents are required to disclose the technical information needed to “enable” the invention, but perhaps these requirements are ineffective, especially in IT fields. And if patents are not licensed, they become a barrier to diffusion. Perhaps some forms of compulsory licensing might overcome this problem. Moreover, machine learning technologies portend even greater difficulties encouraging diffusion in the future because use of these technologies requires not only skilled employees, but also access to critical large datasets.

It seems that making good backroom software, to use internally, has become something of a natural monopoly. Creating such IT has large fixed costs and big random factors. So an obvious question is whether we can usefully regulate this natural monopoly. And one standard approach to regulating monopolies is to force them to sell to everyone at regulated prices. Which in this context we call “compulsory licensing”; firms could be forced to lease their backroom IT to other firms in the same industry at regulated prices.

Note that while compulsory licensing of patents is rare in the US, it is common worldwide, and it is one of the reasons that US drug firms get proportionally less of their revenue from outside the US; other nations force them to license their patents at particularly low prices. So worldwide there is a lot of precedent for compulsory licensing.

The article above claimed that backroom IT is:

inextricably linked to the engineers, workers, systems and business models built around them. … While in the past it might have been possible to license, steal or copy someone else’s technology, these days that technology can’t be separated from the systems of which it’s a part.

I’m not yet convinced of this, and so I want to hear independent IT folks weigh in on this key question. I can see that different IT subsystems could be mixed up with each other, but I’m less convinced that the total set of backroom IT of a firm depends that much on its particular products and services. Maybe other firms in an industry would have to take the entire backroom IT bundle of the leading firm, rather than being able to pick and choose among subsystems. But when the leading IT bundle is so much better, I could see this option being attractive to the other firms.

The leading firm might incur some costs in making its IT package modular enough to separate it from its particular products and services. But such modularity is a good design discipline, and a compulsory licensing regime could compensate firms for such costs.

Note that I’m not saying that it is obvious that this is a good solution. I’m just saying that this is a standard obvious policy response to consider, so someone should be looking into it. At the moment I’m not seeing other good options, aside from just accepting the increased IT-induced firm inequality and its many consequences.

Added 12:30: Okay, so far the pretty consistent answer I’ve heard is that it is very hard to take software written for internal use and make it available for outside use. Even if you insist outsiders do things your way.

So assuming we are stuck with industry leaders winning big compared to others due to better IT, one worry for the future is what happens when leaders of different industries start to coordinate their IT with each other, as phone firms are now coordinating with car firms. Such firms might merge to encourage their synergies. Then we might have single firms as big winning leaders in larger economic sectors.


Dalio’s Principles

When I write and talk about hidden motives, many respond by asking how they could be more honest about their motives. I usually emphasize that we have limited budgets for honesty, and that it is much harder to be honest about yourself than others. And it is especially hard to be honest about the life areas that are the most sacred to you. But some people insist on trying to be very honest, and our book can make them unhappy when they see just how far they have to go.

It is probably easier to be honest if you have community support for honesty. And that makes it interesting to study the few groups who have gone the furthest in trying to create such community support. An interesting example is the hedge fund Bridgewater, as described in Dalio’s book Principles:

An idea meritocracy where people can speak up and say what they really think. (more)

#1 New York Times Bestseller … Ray Dalio, one of the world’s most successful investors and entrepreneurs, shares the unconventional principles that he’s developed, refined, and used over the past forty years to create unique results in both life and business—and which any person or organization can adopt to help achieve their goals. … Bridgewater has made more money for its clients than any other hedge fund in history and grown into the fifth most important private company in the United States. … Along the way, Dalio discovered a set of unique principles that have led to Bridgewater’s exceptionally effective culture. … It is these principles … that he believes are the reason behind his success. … are built around his cornerstones of “radical truth” and “radical transparency,” … “baseball cards” for all employees that distill their strengths and weaknesses, and employing computerized decision-making systems to make believability-weighted decisions. (more)

This book seems useful if you were the absolute undisputed ruler of a firm, so that you could push a culture of your choice and fire anyone who seems to resist. And were successful enough to have crowds eager to join, even after you’d fired many. And didn’t need to coordinate strongly with customers, suppliers, investors, and complementors. Which I guess applies to Dalio.

But he has little advice to offer those who don’t sit in an organization or social network that consistently rewards “radical truth.” He offers no help in thinking about how to trade honesty against the other things your social contexts will demand of you. Dalio repeatedly encourages honesty, but he admits that it is often painful, and that many aren’t suited for it. He mainly just says to push through the pain, and get rid of people who resist it, and says that these big visible up-front costs will all be worth it in the long run.

Dalio also seems to equate conflict and negative opinions with honesty. That is, he seeks a culture where people can say things that others would rather not hear, but doesn’t seem to consider that such negative opinions need not be “honest” opinions. The book makes hundreds of claims, but doesn’t cite outside sources, nor compare itself to other writings on the subject. Dalio doesn’t point to particular evidence in support of particular claims, nor give them any differing degrees of confidence, nor credit particular people as the source of particular claims. It is all just stuff he’s all sure of, that he endorses, all supported by the evidence of his firm’s success.

I can believe that the firm Bridgewater is full of open conflict, with negative opinions being frequently and directly expressed. And it would be interesting to study social behavior in such a context. I accept that this firm functions doing things this way. But I can’t tell if it succeeds because of or in spite of this open conflict. Yes this firm succeeds, but then so do many others with very different cultures. The fact that the top guy seems pretty self-absorbed and not very aware of the questions others are likely to ask of his book is not a good sign.

But if it’s a bad sign it’s not much of one; plenty of self-absorbed people have built many wonderful things. What he has helped to build might in fact be wonderful. It’s just too bad that we can’t tell much about that from his book.
