Divorcing Tax Career Agents

I love the tax career agent idea, because it seems so simple, seems to have no losers, and doesn’t seem to step on any taboos.

The basic idea is this: if the government auctions off the right to have the income tax revenue paid by taxpayer X diverted to the auction winner, then that winner becomes a career agent. Such an agent has an incentive to advise and promote X, just as career agents do today in acting, music, sports, etc. Such auctions are a direct substitute for government selling debt, which governments already do to convert future taxes into present revenue. Furthermore, competitive auctions will make sure the government gets paid the asset's full value, and would also tend to pick auction winners who are actually better able to advise and promote their client X.

At the moment the government is your tax career agent, and it is doing a terrible job; it does absolutely nothing. So selling the role to someone else is a net gain if agents on average find it in their interest to do some advising or promoting, and if their audiences aren’t on average fooled by that advice and promotion. At least if the value agents get by doing so is a lot bigger than the simple transaction costs of setting up the auction and diverting the payments, costs that can be made very low. Agents need not help most clients; even if they only help 5% of clients, that’s still an improvement over the status quo.

Now economic theorists will worry about transaction costs due to asymmetric information; what if some auction bidders know more than others about a taxpayer’s chances of future income? In that case the auction winner might get “information rents”, and so not have to pay full price, cheating the government out of a bit of their revenue. A simple solution to this is to do the auction very early in the life of taxpayer X, such as at birth. Then few know much about X’s future income, and so info rents are almost zero.

However, a much bigger problem is that many people find it creepy to think of someone they didn’t pick trying to influence their life. This is so even though such agents want to help them make more money, and have no official powers whatsoever to influence their behaviors; they can only advise, and promote to audiences who will listen to them. Even so, many worry that tax career agents will do illegal things, or exert malicious influence via gossip or bribes. So it seems best to modify the proposal to placate these worriers, even if that placates economic theorists less. That is, we need to find ways to help people feel more in control of their tax career agents.

Now if the tax authority that sets up this whole thing has a limited geographic scope, like a state or nation, then taxpayer X can always threaten to leave that region, and so completely destroy the value of the asset that the agent has purchased. Furthermore, if X simply makes it clear that they won’t listen at all to their current career agent, but would listen to other possible agents who might fill that role, that should create an incentive for their existing unloved agent to “divorce” them by selling their asset to one of those other agents.

These do seem to be substantial powers of X over their agent. But let’s give taxpayers even more powers. For example, maybe the tax career agent isn’t even created until taxpayer X explicitly asks for it. And maybe X can choose a limited time duration to be auctioned off, allowing them to decide whether to authorize another auction later when that first time period runs out.

Furthermore, we could more strongly encourage an unloved tax career agent to sell their role to someone else via setting a self-set (“Harberger”) tax on the asset that the tax career agent holds. Just as self-set taxes on physical property encourage transfer of property to those better able to gain value from it.

Finally, we might authorize taxpayer X to give a 1% discount on this self-set property tax to any owner of their choosing. This would give a further push for an unloved agent to sell their asset to a better loved agent. And yet it would not much threaten government revenue.
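To make these incentives concrete, here is a minimal Python sketch of the self-set tax plus discount; all dollar figures, rates, and the 30-year horizon are my own illustrative assumptions, not part of the proposal:

```python
def net_asset_value(annual_income, tax_rate, r=0.05, years=30):
    """Value of holding the agent role: present value of the diverted
    tax revenue, net of the annual self-set (Harberger) tax, assuming
    the owner declares the asset's true value (as a Harberger tax,
    which forces sale at the declared price, tends to induce)."""
    annuity = sum(1 / (1 + r) ** t for t in range(1, years + 1))
    # declared value v solves v = annual_income*annuity - tax_rate*v*annuity
    return annual_income * annuity / (1 + tax_rate * annuity)

# Illustrative numbers: X ignores the unloved agent, so diverted
# revenue stays at $5k/yr; a better-loved rival could coax it to
# $6k/yr, and also gets X's 1% discount on the self-set tax rate.
unloved = net_asset_value(5_000, tax_rate=0.07)
loved = net_asset_value(6_000, tax_rate=0.07 * 0.99)
print(loved > unloved)  # any sale price between the two benefits both
```

Because the favored rival both expects more income and pays a lower tax, the asset is worth strictly more to them, so a mutually profitable "divorce" sale price exists.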

All of this seems to me to give taxpayer X plenty of powers to discipline any disliked tax career agent. Thus allowing people to feel less creeped out by this idea. All while having governments suffer only modest info rents that cut their auction revenue.

This poll says the ideal self-set tax cuts the asset value by 7%. And this poll suggests that 70% support tax career agents; I might do a more nationally representative poll to see how well that generalizes.


Values Are Facts

Humanity has developed many rich and powerful tools for thinking about “facts”. Our estimates of facts are rich and deeply connected, allowing us to learn much about each fact from all the others. Furthermore, we have many kinds of specialists who deal with particular kinds of facts.

However, we also care about “values”, and many say that our ways to think about facts just don’t work for values. They call attention to a “fact-value distinction.” And they develop whole separate ways to discuss values, and whole separate kinds of specialists, with value ways and specialists not much connected to their versions for facts.

That seems a mistake to me, as “values” are just another kind of “facts”. They may be an unusually difficult kind of fact to think about, and there may be specialized tools appropriate for them. But because values are facts, we should be able to use their connections to other facts to learn much about them. Our estimates of values should be well connected with and integrated into our tools and views about other related facts. Just as we do with other facts.

In “facts” I include everything we might say about the true arrangement of our physical world. Such as where each particle (e.g., electron, quark, or photon) is at each point in time. And its spin, momentum, etc. Not just at the present, but also the past and the future. Everywhere in spacetime, in fact. (And of course the shape of spacetime and the full quantum state of all of this.)

“Facts” also include all our observations and data. We have many standard tools for drawing inferences about our observations from physical arrangements, and vice versa, and also for making joint estimates regarding both. This topic area also includes “indexical” facts, about the mappings between observations and physical arrangements.

“Facts” also include all counterfactuals about other facts. That is, what fact would instead have been true if other facts had been different than they actually are, were, or will be. Our best theories of how the world works usually tell us how to predict counterfactuals given actual facts.

Finally, “facts” include fits to abstractions of all of the above. For example, “temperature at a point” is an abstraction, not implied by any exact particular arrangement of particles. But we have good standard ways to estimate that abstraction from such particle arrangements. Other physical abstractions include “molecules”, “planets”, “plasmas”, and “explosions”. And to the extent that we agree on how to fit such abstraction parameter estimates to actual arrangements, we can treat claims about these parameters as facts.

“Creatures” are also abstractions, and so we have many facts about them, including their locations, movements, and actions. For example, “attack” is a creature abstraction that we can usually fit to physical arrangements, allowing us to talk about who will, did, or might attack whom.

Another quite useful creature abstraction is “expected utility”, whereby we describe a set of creature actions via a set of state-dependent utility numbers, and also a set of state-dependent belief numbers, beliefs which are updated over time with the info contained in the observations of that creature. Even for creatures whose actions only approximately satisfy the axioms of expected utility theory, we can often find a useful best fit to their actions using this framework. And thus talk about such a creature’s utilities and beliefs that way.
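As a sketch of what such a "best fit" can look like, here is a standard logit (softmax) choice model fit to a toy record of actions. The data, the two-option setup, and the grid search are my own illustrative choices, not anything from the post:

```python
import math

def choice_probs(utilities, temp=1.0):
    """Softmax: a creature picks each option with probability rising in
    its utility; temp controls how noisy (non-ideal) its actions are."""
    exps = [math.exp(u / temp) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def log_likelihood(utilities, observed, temp=1.0):
    """How well candidate utilities explain a record of observed
    choices (each an index into the option list)."""
    probs = choice_probs(utilities, temp)
    return sum(math.log(probs[c]) for c in observed)

# Toy data: a creature chose option 1 seven times, option 0 three times.
observed = [1] * 7 + [0] * 3
# Grid-search the utility gap between the two options for the best fit.
best_gap = max((g / 10 for g in range(-30, 31)),
               key=lambda g: log_likelihood([0.0, g], observed))
print(best_gap)  # positive: option 1 fits as the higher-utility option
```

Even though no single utility assignment reproduces the mixed record exactly, the best-fit gap is a well-defined fact about this creature's actions.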

We have many other related decision frameworks that differ from expected utility theory. But almost all of them have analogies to info, beliefs, and utilities. Thus by fitting any of these decision abstractions to the actions of a particular creature, we can talk about the facts of a creature’s utilities, beliefs, and info. In this way there can be facts about each creature’s utilities, and thus about this one kind of “value.”

Some creatures, like humans, often have thoughts and words not only about ordinary facts, but also about their and others’ actions. (Thoughts can also be included in the category of actions.) Such thoughts can be about what they or others have done or will do, or about what they might do given counterfactual assumptions. These thoughts can be not just predictions about such actions, but also various attitudes toward those actions, such as surprise, identity, wariness, or approval.

Thought attitudes that fit some possible actions more than others can be combined with a decision framework to find the best fit utilities associated with such thoughts about actions. This produces facts about a different kind of creature “values”: the values associated with particular attitudes regarding some possible actions, instead of the values associated with actual actions.

One especially useful abstraction about human attitudes regarding actions is “social norms”, whereby human communities encourage and discourage the actions of themselves and others, via approval or disapproval. Such norm-based action approval can also be combined with a decision framework to produce a third kind of “value”: normative value. What some community says that a creature “should” think or do.

We thus have three kinds of “values” that can be described as “facts”: the values associated with actual (including future and counterfactual) actions, the values associated with thought attitudes toward actions, and the “normative” values associated with a community’s social norm disapproval of actions.

If you say “no, I don’t mean any of those kinds of values, I mean true real values, but I have no way at all to connect these true values to these other kinds of values which are facts”, well then I’m just not sure what you could possibly mean. If you say “I’m talking about which acts we might agree are actually good”, that looks to me a lot like a particular kind of thought attitude toward such acts.

Being facts, our views regarding all of these kinds of value facts should be integrated with our views regarding all the other connected facts. For example, imagine that we have views about how much actions, or attitudes toward actions, of different kinds of creatures are likely to be correlated with each other, controlling for context. Surely these between-creature correlations would influence our beliefs about the values of any one of them, including ourselves, given beliefs about the others. Similarly, when we have beliefs about correlations between attitudes about actions and the actions themselves, for the same creature, that connection should induce each of these to be influenced by our beliefs about the other.

The key point here is that values are facts, facts connected to each other and to other facts via the many varied and dense connections typical of the connections between all facts. Once we see this, we can realize that we have many useful strategies for inferring values. So, for example, to figure out which of your actions you might approve, you have available to you many other methods beyond directly consulting your intuitions about those specific actions, and asking those intuitions to approve or disapprove.

You can instead ask your intuition to approve or not of many other related actions. Or consider how your intuitions would likely change in counterfactual scenarios. Or consider the intuitions of others re related acts, not just here and now but all around the world and all through history. You can also look at all of your other actual actions, and your counterfactual actions. You can even consider what you know about the architecture of your mind, and about the cultural and biological history that produced it.

Your values are facts, in general facts are connected to many other related facts, and these many dense connections typically allow us to learn much about each fact. So you can learn a great deal about any one value by considering all of these other kinds of related facts, including value facts. Values need not sit in a mysterious separate sacred realm.


Beware Mob War Strategy

The game theory is clear: it can be in your interest to make threats that it would not be in your interest to carry out. So you can gain from committing to carrying out such threats. But only if you do it right. Your commitment plan must be simple and clear enough for your audience to see when it applies to them, how it is their interest to go along with it, and that people who look like you to them have in fact been consistently following such a plan.

So, for example, it probably won’t work to just lash out at whoever happens to be near you whenever the universe disappoints you somehow. The universe may reorganize to avoid your lashings, but probably not by catering to your every whim. More likely, others will avoid you, or crush you. That’s a bad commitment plan.
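The commitment logic above can be sketched as a tiny extensive-form threat game; the payoff numbers are illustrative assumptions of mine, chosen only so that punishing is costly ex post:

```python
# Challenger moves first (comply or defy); the threatener then either
# carries out a costly punishment or acquiesces.
# Payoffs are (challenger, threatener); punishment hurts both.
payoffs = {
    ("comply", None):      (1, 3),
    ("defy", "punish"):    (0, 0),
    ("defy", "acquiesce"): (3, 1),
}

def challenger_choice(threatener_committed):
    """Backward induction: the challenger predicts the threatener's
    response, then picks the action with the higher payoff for itself."""
    if threatener_committed:
        response = "punish"      # commitment binds, even though 0 < 1
    else:
        response = "acquiesce"   # ex post 1 > 0, so the threat is empty
    defy_payoff = payoffs[("defy", response)][0]
    return "comply" if payoffs[("comply", None)][0] >= defy_payoff else "defy"

print(challenger_choice(False))  # "defy": empty threats get ignored
print(challenger_choice(True))   # "comply": credible commitment deters
```

Without commitment the threatener ends up with 1; with a credible commitment the challenger complies and the threatener gets 3, which is exactly why committing to an ex-post-costly threat can pay.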

Here’s a good commitment plan. A well-run legal system can usefully deter crime via committing to consistently punish law violations. Such a system clearly defines violations, and shows potential violators an enforcement system wherein a substantial fraction of violations will be detected, prosecuted, and punished. Those under the jurisdiction of this law can see this fact, and understand which acts lead to which punishments. Such acts can thus be deterred.

Here’s another pretty good commitment plan. The main nations with nuclear weapons seem to have created a mutual expectation of “mutually assured destruction.” Each nation is committed to responding to a nuclear attack with a devastating symmetric attack. So devastating as to deter attack even if there is a substantial chance that such a response wouldn’t happen. This commitment plan is simple, easy to understand, clearly communicated, and quite focused on particular scenarios. So far, it seems to have worked.

Humans are often willing to suffer large costs to punish those who violate their moral rules. In fact, we probably evolved such moral indignation in part as a way to commit to punishing violations of our local moral norms. In small bands, with norms that were stable across many generations, members could plausibly achieve sufficient clarity and certainty about norm enforcement to deter violations via such threats. So such commitments might have had good plans in that context.

But this does not imply that things would typically go well for us if we freely indulged our moral indignation inclinations in our complex modern world. For example, imagine that we encouraged, instead of discouraged, mob justice. That is, if we encouraged people to gossip to convince their friends to share their moral outrage, building off of each other until they chased down and “lynched” any who offended them.

This sort of mob justice can go badly for a great many reasons. We don’t actually share norms as closely as we think, mob members are often more eager to show loyalty to each other than to verify accusation accuracy, and some are willing to make misleading accusations to take down rivals. More fundamentally, we might say that mob justice goes bad because it is not based on a good commitment plan. Observers just can’t predict mob justice outcomes well enough for it to usefully encourage good behavior, at least compared to a formal legal system.

Now consider the subject of making peace deals to end wars. Such as the current war between Russia and Ukraine. An awful lot of people, probably a majority, of the Ukrainian supporters I’ve heard from seem to be morally offended by the idea of such a peace deal in this case. Even though the usual game theory analyses of war say that there are usually peace deals that both sides would prefer at the time to continued war. (Such deals could focus on immediately verifiable terms; they needn’t focus on unverifiable promises of future actions. In April 2022 Russia and Ukraine apparently had a tentative deal, scuttled due to pressure from Ukrainian allies.)
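The game theory claim here is the standard bargaining-range result: because war destroys value, there is a range of immediate splits that both sides prefer to fighting. A minimal sketch, with made-up numbers standing in for any real conflict:

```python
def bargaining_range(p, cost_a, cost_b):
    """Standard war-bargaining model: the sides split a pie of size 1.
    Side A wins a war with probability p; war costs each side cost_a
    and cost_b of the pie. Any split giving A a share in
    [p - cost_a, p + cost_b] beats war for BOTH sides."""
    return (p - cost_a, p + cost_b)

# Illustrative: A wins with 60% odds, war burns 15% of the pie per side.
lo, hi = bargaining_range(p=0.6, cost_a=0.15, cost_b=0.15)
print(lo < hi)  # positive war costs make the mutual-gain range non-empty
```

So long as war is costly, the range is non-empty, which is why the usual analysis says mutually preferred deals typically exist.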

Many of these peace deal opponents are willing to justify this stance in consequentialist terms: they say that we should commit to not making such deals. Which, as they are eager to point out, is a logically coherent stance due to the usual game theory analysis. We should thus “hold firm”, “teach them a lesson”, “don’t let them get away with it”, etc. All justified by game theory, they say.

The problem is, I haven’t seen anyone outline anything close to a good commitment plan here. Nothing remotely as clear and simple as we have with criminal law, or with mutually assured destruction. They don’t clearly specify the set of situations where the commitment is to apply, the ways observers are to tell when they are in such situations, the behavior that has been committed to there, or the dataset of international events that shows that people that look like us have in fact consistently behaved in this way. Peace deal opponents (sometimes called “war mongers”) instead mainly just seem to point to their mob-inflamed feelings of moral outrage.

For example, some talk as if we should just ignore the fact that Russia has nuclear weapons in this war, as if we have somehow committed to doing that in order to prevent anyone from using nuclear weapons as a negotiating leverage. The claim that nations have been acting according to such a commitment doesn’t seem to me at all a good summary of the history of nuclear powers. And if the claim is that we should start now to create such a commitment by just acting as if it had always existed, that seems even crazier.

If we have not actually found and clearly implemented a good commitment plan, then it seems to me that we should proceed as if we have not made such a commitment. So we must act in accord with the usual game theory analysis. Which says to compromise and make peace if possible. Especially as a way to reduce the risk of a large nuclear war.

The possibility of a global nuclear war seems a very big deal. Yes, war seems sacred and that inclines us toward relying on our intuitions instead of conscious calculations. It inclines us toward mob war strategy. But this issue seems plenty important enough to justify our resisting that inclination. Yes, a careful analysis may well identify some good commitment plans, after which we could think about how to move toward making commitments according to those plans.

But following the vague war strategy inclinations of our mob-inflamed moral outrage seems a poor substitute for such a good plan. If we have not yet actually found and implemented a good plan, we should deal with a world where we have not made useful commitments. And so make peace, to avoid risking the destruction of war.


Beware Profane Priests

We humans evolved a way to take some of the things that are important to us, and bind our groups together by seeing those things as “sacred”. That is, by seeing them in the same way, via always seeing them from a distance. Such things are seen more abstractly and intuitively, with less conscious calculation, and less attending to details. Sacred things are idealized, and not to be mixed with or traded off against other things. Sacred thinking can be less competent, but induces more effort, and can keep us from being overwhelmed by strong passions.

Let us call the experts associated with a sacred area “priests”. The possibility of priests raises two issues for the sacred. First, if ordinary people saw a sacred area as one where they could personally gain expertise, and where they need to think to judge the relative expertise of others, this would seem to induce conscious calculation about the details of this sacred topic, which is a no-no. Second, those who are most expert would think a lot about the topic, and often see it up close, which would make it harder for them to see it as sacred.

Humans seem to solve the first issue by treating all sacred topics as being at one of two extremes. At one extreme, e.g., medicine, there are highly expert sacred priests, whom the rest of us are not to second-guess nor evaluate. At the other extreme, e.g., politics or friendship, expertise via thinking is seen as not possible, making everyone’s opinions nearly as good as anyone else’s. In neither case does thinking help ordinary people much, either to form opinions or to choose experts.

On the second issue, experts who only rarely directly confront the most sacred versions of their subject up close, like soldiers, police, or doctors, can drill and practice in a far mode, so that they can perform well intuitively and without much thought in the rare big stakes cases. But what about the other priests, who confront their sacred subjects more often?

When we think about this question in a sacred mode, intuitively and using a few abstract associations, our minds usually conclude that as the sacred is good and ideal, contact with it makes people more good and ideal. Thus we can trust priests to act in our collective interest. But the norm that the rest of us are not to judge such experts, and are to defer strongly to their judgement, gives them a lot of collective discretion. And it seems to me that near mode engagement with the topic means we can’t count much on their reverence for it to restrain them from using their discretion for selfish advantage.

Thus in fact priests will often act profanely, a fact that the rest of us are often unwilling to see. Beware profane priests.

Added 26Sep: Someone suggested we trust experts on the sacred due to their sacrificing more to take such jobs. So I did 16 Twitter polls on 16 kinds of jobs. Here are median estimates of “% of value which workers of that type sacrifice on average to do their job”:


Sacred Inquiry

The reason I first started to study the sacred was that “sacred cows” kept getting in my way; our treating things as sacred often blocks sensible reforms. But now that I have a plausible theory of how and why we treat some things as sacred, I have to admit: I too treat some things as sacred. Maybe I should learn to stop that, but it seems hard. So perhaps we should accept the sacred as a permanent feature of human thought, and instead try to change which things we see as how sacred, or how exactly we do that.

So it seems worth my trying to describe in more detail how I see something as sacred, not just habitually but even after I notice this fact. In this post, that thing will be: intellectual inquiry. In this post I’ll mostly try to describe how I revere this, and not so much ask whether I should.

All the thinking and talking that happens in the world helps us to do many things, and to figure out many things. And while some of those things are pretty concrete and context-dependent, others are less so, helping us to learn more general stuff whose usefulness plausibly extends further into the future. And this I call “intellectual progress”.

In general, all of the thinking and talking that we do contributes to this progress, even though it is done for a wide variety of motives, and via many different forms of social organization. I should welcome and celebrate it all. And while abstractly, I do, I notice that, emotionally, I don’t.

It seems that I instead deeply want to distinguish and revere a particular more sacred sort of thinking and talking from the rest. And instead of assuming that my favored type is just very rare, hardly of interest to anyone but me, I instead presume that a great many of us are trying to produce my favored type, even if most fail at it. Which can let me presume that most must know how to do better, and thus justify my indignant stance toward those who fail to meet my standards.

This sort of thinking and talking that I revere is that which actually achieves substantial and valuable progress in abstract understanding, and is done in a way to effectively and primarily achieve this goal. Thus I see as “profane” work that appears to be greatly influenced by other purposes, such as showing off one’s impressiveness, or assuring associates of loyalty.

That is, I have a sacred purity norm, where I don’t like my pure sacred stuff mixed up with other stuff. Good stuff not only has to achieve good outcomes, it also has to be done the right way for the right reasons. I tend to simplify this category and its boundary, and presume that it can be distinguished clearly. I feel bound to others who share my norms, even if I can’t actually name any of them. I don’t calculate most of this; it instead comes intuitively, and seems aesthetically elegant. And I can’t recall ever choosing all this; it feels like I was always this way.

Now on reflection this has a lot of specific implications re what I find more sacred or profane, as I have a lot of beliefs about which intellectual topics are more valuable, and what are more effective methods. And I’ll get to those soon here.

But first let me note that while many intellectuals also see their professional realm as sacred, and have many similar sacred norms about how their work should be done, most of them don’t apply such norms nearly as strongly to their personal lives. In contrast, I extend this to all my thinking and talking. That is, while I’m okay with engaging in many kinds of thinking and talking, I want to sharply distinguish some sacred versions, where all these sacred norms apply, and try to actually use them often in my personal life.

Okay, I can think of a lot of specific implications this has for what I respect and criticize. The following is a somewhat random list of what occurs to me at the moment.

For example, I take academic papers to be implicitly claiming to promote intellectual progress. This implies that they should try to be widely available for others to critique and build on. So I dislike papers that are less available, or that use needlessly difficult languages or styles. Or that aren’t as forthcoming or concise as they could be re what theses they argue, to allow readers to judge interest on that basis. I dislike intentional use of vague terms when clearer terms were available, and switching between word meanings to elude criticism.

I feel that a paper which cites another is claiming that it got some particular key input from that other paper, and a paper that cites nothing is claiming to have not needed such inputs. So I disapprove of papers that fail to cite key inputs, or that substitute a more prestigious source for the less prestigious source from which they actually got their input.

I see a paper on a topic as implicitly claiming that the topic is some rough approximation to the best topic they could have chosen, and a paper using a method as claiming that the method is some rough approximation to the best method. So it bothers me when it seems obvious the topic isn’t so good, or when the method seems poorly chosen. I’m also bothered when the length of some writing seems poorly matched to the thesis presented. For example, if a thesis could have been adequately argued in a paper, then I’m bothered if it’s in a book with lots of tangential stuff added on to fill out the space.

I find it profane when authors seem to be pushing an agenda via selective choice of arguments, evidence, or terminology. They should acknowledge weak points and rebuttals of which they are aware without making readers or critics find them. I dislike when authors form mutual admiration societies designed to praise each other regardless of the quality of particular items. That is, I find the embrace of bias profane. Which maybe shouldn’t be too surprising given my blog name.

Now I have to admit that it isn’t clear how effective these stances are at promoting this sacred goal of mine. While they might happen to help, it seems more plausible that they result from a habit of treating this area as sacred, rather than from some careful calculation of their effects on intellectual progress. So it remains for me to reconsider my sacred stances in light of this criticism.


Evaluating “Feminism”

My close friend and colleague Bryan Caplan has a new book, “Don’t Be a Feminist”. In general, I’m reluctant to embrace or oppose vague political slogan terms like “feminist”, preferring instead to stick to terms that are better defined. But I accept that his definition isn’t greatly wide of how I’ve seen the term usually used:

Feminism is the view that society generally treats men more fairly than women.

His summary assessment on fairness is:

What then is the big picture? The fairness of the treatment that men and women receive in our society is remarkably equal. And if there is a disparity, it is probably in women’s favor. This is especially true if we ponder one last gender gap: Men endure far more false accusations of unfairness than women do.

Caplan’s essay seems to reasonably summarize what we know about the ways in which men and women are favored or not, and I agree that over all things look roughly equal. I’m more skeptical that including false accusations against men changes this overall assessment; I’d say we still don’t know which side is favored more overall. And given how close things seem, I find it hard to care much about the overall sign.

Here’s another key Caplan claim:

Feminism is so rhetorically dominant that critics fear opening their mouths. … Most intellectual movements make an effort to distinguish wrong-doers from bystanders. … Feminist thinkers, in contrast, routinely and self-righteously do otherwise. … Most self-identified feminists are probably just regular people … Unfortunately, most vocal feminists are fanatics – and rank-and-file feminists tend to defer to them.

Here I also mostly agree, and can in fact attest via personal experience. Most of my “cancellation” (which has substantially harmed my career) has been due to people who saw themselves as feminists aggressively misinterpreting a few neutral things I said as anti-feminist, and most observers going along with that move. A great many have disagreed with me over the years, but few others have treated me this way.

Caplan didn’t directly address what I see as the most common “feminist” issue raised: is it okay to have, and act on, gender-conditional expectations about behavior? Seems to me that this is okay when such expectations are based primarily on observed behaviors. This implies that it can be okay to have gender roles, if these result from gendered expectations.

Yes, one should be open to the possibility of seeing outlier cases, of behaviors changing with time or context, and that gender-behavior correlations might result from gendered-expectations. That is, we should look out for ways to change our matching sets of behaviors and expectations. Which is to say, we should look for ways to switch to superior game theory equilibria.

But that needn’t require us denying observed facts about behaviors in the equilibria that we have seen so far. Bryan is roughly right, both on the overall balance of gender unfairness, and on feminist rhetorical aggression.


Hail Industrial Organization


Economists know many useful things about human social behavior, and about how to improve it. And the world would probably be better off if it listened to economists more. But while the world respects economists enough to mention when their analyses support favored policies, people are much less interested in deciding what to favor based on econ analyses. What could get people to listen more?

There are many relevant factors, but a big one where we might do better is: a track record for being useful. For example, the world listens to chemists, computer scientists, and engineers in part because of their widely-known reputations for having long track records of being directly and simply useful to diverse clients.

Yes, econ majors in college are among the best paid outside of computers and engineering. But that may only show that learning our methods is an impressive feat, not that we produce reliable results. And the fact that people like to point to our analyses to support their policies only shows that we have prestige, not that we are right. What we want is a track record of being, not just impressive, but directly and clearly right, and useful because of that.

Now it turns out that we economists have actually found a way to be frequently and directly useful to diverse clients, and via being right, not just impressive. But we’ve failed to claim sufficient credit for this, and now we seem to be dropping the ball in pursuing it. That place is business strategy.

When a firm considers what products or services to make, what customers to seek and how, and what prices to charge, it can help to have a theory of that firm’s industry. A theory of its customer demands and producer costs. A theory that says who wants what, who can take what actions when, who knows what when doing what, and how each actor tends to respond to their expectations re other actions. With such a theory, one can predict which actions might be how profitable, and choose accordingly.

Firms today regularly debate key business choices, and hire management consultants to advise those decisions. In addition, new firms pitch their plans to investors, and frequently revise such plans. And while all these choices might seem to be done without theories, that is an illusion. In fact, all such analyses are based on at least implicit theories of how local industries work. Such theories might be simple, or wrong, but they are there.

Now many aspects of useful industry theories are quite context dependent. But other aspects are more general. There are in fact many common patterns in key industry features, and in the ways that industries compete. And in the last century, the world has made great progress in developing better general theories of how firms compete in industries. Furthermore, economics has been central to that story.

In particular, game theory has become a robust general account of how social decisions are made. And we’ve identified dozens of key factors that influence industrial competition. Key ways in which industries differ, that result in different styles of competition. And we’ve worked out a great many specific models of how small sets of these factors work together to create distinctive patterns of industry competition. And much, perhaps even most, of this has happened within the econ field of “industrial organization.”
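As one concrete illustration of the kind of model industrial organization supplies, here is a minimal sketch of Cournot quantity competition, a textbook IO model of duopoly. The demand and cost parameters are illustrative, and the Nash equilibrium is found by iterating each firm’s best response:

```python
# Cournot duopoly: inverse demand P = a - b*(q1 + q2), constant
# marginal cost c. Each firm picks its quantity to maximize profit
# given the rival's quantity. Parameter values are illustrative.

def best_response(q_other, a=100.0, b=1.0, c=10.0):
    """Profit-maximizing quantity against a rival producing q_other."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

def cournot_equilibrium(a=100.0, b=1.0, c=10.0, iters=100):
    """Approximate the Nash equilibrium by iterating best responses."""
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = best_response(q2, a, b, c)
        q2 = best_response(q1, a, b, c)
    return q1, q2

q1, q2 = cournot_equilibrium()
# Analytic equilibrium: each firm produces (a - c) / (3b) = 30.
print(q1, q2)
```

The iteration converges because each best response is a contraction; the closed-form answer, (a - c)/(3b) per firm, sits between the monopoly and perfectly competitive industry outputs, which is the sort of distinctive competitive pattern such models predict.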

Today, most who discuss business strategy do so using concepts and distinctions that are well integrated into this rich well-developed and useful econ account of how firms in industries compete. And firms are in fact constantly reconsidering their business strategies using such concepts. So we economists have in fact developed powerful tools that are very useful, and are widely being used.

But, alas, we economists are failing to take credit for it. We don’t teach courses in business strategy, and we don’t recommend students who take our industrial organization courses for such roles. We’ve instead allowed business schools to do that teaching, and to take that credit. And even to take most of the consulting gigs.

Furthermore, academic economists have drifted away from industrial organization; it is no longer in fashion. It mostly uses old-fashioned game theory, instead of now-popular behaviorism or machine learning. It isn’t well suited for controlled experiments, which are so much the fashion in econ these days that all other kinds of data are considered unclean. And it doesn’t give many chances to promote woke agendas. So few people publish in industrial organization, and few students take classes in it. I know, as I still teach it, but to few students, and nearby universities don’t even offer it.

As usual, academic research priorities are mostly set by internal coalition politics, not by what would be good for the world as a whole, or even each field as a whole.


Sacred Distance Hides Motives

My book with Kevin Simler describes many hidden human motives, common in our everyday lives. But that raises the question: how exactly can we humans hide our motives from ourselves?

Consider that we humans are constantly watching and testing our and others’ words and deeds for inconsistency, incoherence, and hypocrisy. As our rivals are eager to point out such flaws, we each try to adjust our words and deeds to cut and smooth the flaws we notice. Furthermore, we habitually adjust our words and deeds to match those of our associates, to make remaining flaws be shared flaws. After a lifetime of such smoothing, how could much personal incoherence remain?

One way to keep motives hidden is to hide your most questionable actions, those where you feel you least control or understand them. If you can’t hide such actions, then try not to make strong claims about related motives. And I think we do follow this strategy for our strongest feelings, such as lust, envy, or social anxiety. We often try to hide such feelings even from ourselves, and when we do notice them we often fall silent; we fear to speak on them.

How easy it is to check deeds and words for coherence depends in part on how dense and clear are their connections. And as all deeds are concrete, and as concrete words tend to be clearer and more densely connected, it seems easier to check concrete priorities, relative to abstract ones.

For example, it is easiest to check the motives that lead to our conscious, often written, calculations of detailed plans. Our time schedules, spatial routes and layouts, and our spending habits are often visible and full of details that make it hard to hide priorities. For example, if you go out of your way to drive past the home of your ex on your way home from work, it will be hard to pretend you don’t care about her.

We have more room to maneuver, however, regarding our more hidden and infrequent concrete choices. And when we are all in denial in similar ways on similar topics, then we can all be reluctant to “throw stones” at our shared “glass houses”. This seems to apply to our hidden motives re schools and medicine, for example; we apparently all want to pretend together that school is for learning job skills and hospitals are for raising health.

Compared to our concrete priorities, our abstract priority expressions (e.g., “family is everything”) are less precise, and so are harder to check against each other for consistency. And abstract expressions can be even harder to check against concrete actions; large datasets of deeds may be required to check for such coherence.

We ground most abstract concepts, like “fire”, “sky”, “kid” or “sleep”, by reference to concrete examples with which we have had direct experience. So when we are confused about their usage, we can turn to those examples to get clear. But we ground other more “sacred” abstract concepts, like “love”, “justice”, or “capitalism”, more by reference to other abstract concepts. These are more like “floating abstractions.” And this habit makes it even harder to check our uses of such sacred concepts for coherence.

This potential of abstract concepts to allow more evasion of coherence checking is greatly enhanced by the fact that our brains have two rather different systems for thinking. First, our “near” system is designed to look at important-to-us close-up things, by attending to their details. This system is better integrated with our conscious thoughts. For example, we often first do a kind of calculation slowly and consciously, and then later by habit we learn to do such calculations unconsciously. This integration supports coherence checking, as we can respond to explicit challenges by temporarily returning to conscious calculation, to find explanations for our choices.

Our “far” system, in contrast, is designed to look at less-important-to-us far-away things, about which we usually know only a few more abstract descriptors. This system uses many opaque quick and dirty heuristics, including intuitive emotional and aesthetic associations, crude correlations, naive trust, and social approval. If someone else is using this system in their head to think about a topic, and then you use this system in your head to try to check their thinking, you will have a hard time judging much more than whether your system gives the same answers as theirs. If you get different answers, it will be hard to say exactly why.

As our minds tend to invoke our far systems for thinking about more abstract topics, that makes it even harder to check abstract thoughts for coherence. But, you might respond, if that system is designed for dealing with relatively unimportant things, won’t the other near system get invoked for important topics, limiting this problem of being harder to check coherence to unimportant topics?

Alas, no, due to the sacred. Our sacred things are our especially important things, described via floating abstractions, where our norm is to think about them only using our far systems. We are not to calculate them, consider their details, or mix them with or trade them off against other things. Our intuitions there are sacred, and beyond question.

This makes it hard to check the coherence of related deeds and words. The main thing we can do there is to intuit our own answer and compare it to others’ answers. If we get the same answers, that confirms that they share our sense of the sacred, and are from our in-group. If not, we can conclude they are from an out-group, and thus suspect; they didn’t learn the “right” sense of the sacred.

And that’s some of the ways that our minds tend to hide our motives, even given the widespread practice of trying to expose incoherence in rivals’ words and deeds. Floating abstractions help, and the sacred helps even more. And maybe we go further and coordinate to punish those who try to expose our sacred hypocrisies.

Note that I’m not claiming that all these habits and structures were designed primarily for this effect of making it harder to check our words and deeds for coherence. I’m mainly pointing out that they have this effect.


We See The Sacred From Afar, To See It Together

I’ve recently been trying to make sense of our concept of the “sacred”, by puzzling over its many correlates. And I think I’ve found a way to make more sense of it in terms of near-far (or “construal level”) theory, a framework that I’ve discussed here many times before.

When we look at a scene full of objects, a few of those objects are big and close up, while a lot more are small and far away. And the core idea of near-far is that it makes sense to put more mental energy into analyzing each object up close, objects that matter to us more, by paying more attention to their detail, detail often not available about stuff far away. And our brains do seem to be organized around this analysis principle.

That is, we do tend to think less, and think more abstractly, about things far from us in time, distance, social connection, or hypothetically. Furthermore, the more abstractly we think about something, the more distant we tend to assume are its many aspects. In fact, the more distant something is in any way, the more distant we tend to assume it is in other ways.

This all applies not just to dates, colors, sounds, shapes, sizes, and categories, but also to the goals and priorities we use to evaluate our plans and actions. We pay more attention to detailed complexities and feasibility constraints regarding actions that are closer to us, but for far away plans we are content to think about them more simply and abstractly, in terms of relatively general values and principles that depend less on context. And when we think about plans more abstractly, we tend to assume that those actions are further away and matter less to us.

Now consider some other ways in which it might make sense to simplify our evaluation of plans and actions where we care less. We might, for example, just follow our intuitions, instead of consciously analyzing our choices. Or we might just accept expert advice about what to do, and care little about experts’ incentives. If there are several relevant abstract considerations, we might assume they do not conflict, or just pick one of them, instead of trying to weigh multiple considerations against each other. We might simplify an abstract consideration from many parameters down to one factor, down to a few discrete options, or even all the way down to a simple binary split.

It turns out that all of these analysis styles are characteristic of the sacred! We are not supposed to calculate the sacred, but just follow our feelings. We are to trust priests of the sacred more. Sacred things are presumed to not conflict with each other, and we are not to trade them off against other things. Sacred things are idealized in our minds, by simplifying them and neglecting their defects. And we often have sharp binary categories for sacred things; things are either sacred or not, and sacred things are not to be mixed with the non-sacred.

All of which leads me to suggest a theory of the sacred: when a group is united by valuing something highly, they value it in a style that is very abstract, having the features usually appropriate for quickly evaluating things relatively unimportant and far away. Even though this group in fact tries to value this sacred thing highly. Of course, depending on what they try to value, such attempts may have only limited success.

For example, my society (US) tries to value medicine sacredly. So ordinary people are reluctant to consciously analyze or question medical advice; they are instead to just trust its priests, namely doctors, without looking at doctor incentives or track records. Instead of thinking in terms of multiple dimensions of health, we boil it all down to a single health dimension, or even a binary of dead or alive.

Instead of seeing a continuum of cost-effectiveness of medical treatments, along which the rich would naturally go further, we want a binary of good vs bad treatments, where everyone should get the good ones no matter what their cost, and regardless of any other factors besides a diagnosis. We are not to make trades of non-sacred things for medicine, and we can’t quite believe it is ever necessary to trade medicine against other sacred things. Furthermore, we want there to be a sharp distinction between what is medicine and what is not medicine, and so we struggle to classify things like mental therapy or fresh food.

Okay, but if we see sacred things as especially important to us, why ever would we want to analyze them using styles that we usually apply to things that are far away and the least important to us? Well one theory might be that our brains find it hard to code each value in multiple ways, and so typically code our most important values as more abstracted ones, as we tend to apply them most often from a distance.

Maybe, but let me suggest another theory. When a group unites itself by sharing a key “sacred” value, then its members are especially eager to show each other that they value sacred things in the same way. However, when group members hear about and observe how an associate makes key sacred choices, they will naturally evaluate those choices from a distance. So each group member also wants to look at their own choices from afar, in order to see them in the same way that others will see them.

In this view, it is the fact that groups tend to be united by sacred values that is key to explaining why they treat such values in the style usually appropriate for relatively unimportant things seen from far away, even though they actually want to value those things highly. Even though such a from-a-distance treatment will probably lead to a great many errors and misjudgments when actually trying to promote that thing.

You see, it may be more important to groups to pursue a sacred value together than to pursue it effectively. Such as the way the US spends 18% of GDP on medicine, as a costly signal of how sacred medicine is to us, even though the marginal health benefit of our medical spending seems to be near zero. And we show little interest in better institutions that could make such spending far more cost effective.

Because at least this way we all see each other’s ineffective medical choices in the same way. We agree on what to do. And after all, that’s the important thing about medicine, not whether we live or die.

Added 10Sep: Other dual process theories of brains give similar predictions.


Bizarre Accusations

Imagine that you planned a long hike through a remote area, and suggested that it might help to have an experienced hunter-gather along as a guide. Should listeners presume that you intend to imprison and enslave such guides to serve you? Or is it more plausible that you propose to hire such people as guides?

To me, hiring seems the obvious interpretation. But, to accuse me of advancing a racist slavery agenda, Audra Mitchell and Aadita Chaudhury make the opposite interpretation in their 2020 International Relations article “Worlding beyond ‘the’ ‘end’ of ‘the world’: white apocalyptic visions and BIPOC futurisms”.

In a chapter “Catastrophe, Social Collapse, and Human Extinction” in the 2008 book Global Catastrophic Risks I suggested that we might protect against human extinction by populating underground refuges with people skilled at surviving in a world without civilization:

A very small human population would mostly have to retrace the growth path of our human ancestors; one hundred people cannot support an industrial society today, and perhaps not even a farming society. They might have to start with hunting and gathering, until they could reach a scale where simple farming was feasible. And only when their farming population was large and dense enough could they consider returning to industry.

So it might make sense to stock a refuge with real hunter-gatherers and subsistence farmers, together with the tools they find useful. Of course such people would need to be disciplined enough to wait peacefully in the refuge until the time to emerge was right. Perhaps such people could be rotated periodically from a well protected region where they practiced simple lifestyles, so they could keep their skills fresh. And perhaps we should test our refuge concepts, isolating real people near them for long periods to see how well particular sorts of refuges actually perform at returning their inhabitants to a simple sustainable lifestyle.

On this basis, Mitchell and Chaudhury call me a “white futurist” and “American settler economist” seeking to preserve existing Euro-centric power structures:

Indeed, many contributors to ‘end of the world’ discourses offer strategies for the reconstruction and ‘improvement’ of existing power structures after a global catastrophe. For example, American settler economist Robin Hanson calculates that if 100 humans survived a global catastrophic disaster that killed all others, they could eventually move back through the ‘stages’ of ‘human’ development, returning to the ‘hunter-gatherer stage’ within 20,000 years and then ‘progressing’ from there to a condition equivalent to contemporary society (defined in Euro-centric terms). …

some white futurists express concerns about the ‘de-volution’ of ‘humanity’ from its perceived pinnacle in Euro-centric societies. For example, American settler economist Hanson describes the emergence of ‘humanity’ in terms of four ‘progressions’

And solely on the basis of my book chapter quote above, Mitchell and Chaudhury bizarrely claim that I “quite literally” suggest imprisoning and enslaving people of color “to enable the future re-generation of whiteness”:

To achieve such ideal futures, many writers in the ‘end of the world’ genre treat [black, indigenous, people of color] as instruments or objects of sacrifice. In a stunning display of white possessive logic, Hanson suggests that, in the face of global crisis, it

‘might make sense to stock a refuge with real hunter-gatherers and subsistence farmers, together with the tools they find useful. Of course, such people would need to be disciplined enough to wait peacefully in the refuge until the time to emerge was right.

In this imaginary, Hanson quite literally suggests the (re-/continuing)imprisonment, (re-/continuing)enslavement and biopolitical (re-/continuing) instrumentalization of living BIPOC in order to enable the future re-generation of whiteness. This echoes the dystopian nightmare world described in …

And this in an academic journal article that supposedly passed peer review! (I was not one of the “peers” consulted.)

To be very clear, I proposed to hire skilled foragers and subsistence farmers to serve in such roles, compensating them as needed to gain their consent. I didn’t much care about their race, nor about the race of the world that would result from their repopulating the world. And presumably someone with substantial racial motivations would in fact care more about that last part; how exactly does repopulating the world with people of color promote “whiteness”?
