Will War Return?

Usually, I don’t get worked up about local short term trends; I try to focus on global long term trends, which mostly look pretty good (at least until the next great era comes). But lately I’ve seen some worrying changes to big trends. For example, while for over a century IQ has risen and death rates have fallen, both steadily, in the last two decades IQ has stopped rising in most rich nations, and in the U.S. death rates have started rising. Economic growth also seems to have slowed, though not stopped, world-wide.

Added to these are some worrisome long term trends. Global warming continues. Fertility has been falling for centuries. Rates of innovation per innovator have been falling greatly for perhaps a century. And since the end of the world wars, inequality and political polarization have been increasing.

One good-looking trend that hasn’t reversed lately is a falling rate of violence, via crime, civil war, and war between nations. But this graph of war deaths over the last 600 years makes me pause:

Yes, war death rates have fallen since the world wars, but those wars were a historical peak. And though the pattern is noisy, we seem to see a roughly half century cycle, a cycle that is perhaps increasing in magnitude. So we have to wonder: are we now near a war cycle nadir, with another war peak coming soon?

The stakes here are hard to exaggerate. If war is coming back soon, the next peak might make for record high death tolls. And the easiest way to imagine achieving that is via nukes. If war may come back soon with a vengeance, we must consider preparing for that possibility.

Not only have we seen fewer war deaths since the world wars, we’ve also seen a great reduction in social support for military virtues, values, and investments. Compared to our ancestors, we glorify soldiers less, and less steel non-soldiers to sacrifice for war. (E.g., see They Shall Not Grow Old.) In contrast, ancient societies were in many ways organized around war, offering great status and support for warriors. They even supported soldiers raping, pillaging, exterminating, and enslaving enemies.

Yes, trying to create more local social support for war might well help create the next rise of war. Which could be a terrible thing. (Yes, my even talking about this could help cause it, but even here I prioritize honesty.) However, if preparing more sooner for war helps nations to win or at least survive the next war peak, do you really want it to be only other nations who gain that advantage?

Given the stakes here, it seems terribly important to better understand the causes of the recent decline in war deaths. I’ve proposed a farmers-returning-to-foragers story, whose simplest version predicts a continuing decline. But I’m far from confident of that simplest version, which would not have predicted the world wars as a historical peak. Please fellow intellectuals, let’s figure this out!


Beware Nggwal

Consider the fact that this was a long standing social equilibrium:

During an undetermined time period preceding European contact, a gargantuan, humanoid spirit-God conquered parts of the Sepik region of Papua New Guinea. … Nggwal was the tutelary spirit for a number of Sepik horticulturalist societies, where males of various patriclans were united in elaborate cult systems including initiation grades and ritual secrecy, devoted to following the whims of this commanding entity. …

a way of maintaining the authority of the older men over the women and children; it is a system directed against the women and children, … In some tribes, a woman who accidentally sees the [costumed spirit or the sacred paraphernalia] is killed. … it is often the responsibility of the women to provide for his subsistence … During the [secret] cult’s feasts, it is the senior members who claim the mantle of Nggwal while consuming the pork for themselves. …

During the proper ritual seasons, Ilahita Arapesh men would wear [ritual masks/costumes], and personify various spirits. … move about begging small gifts of food, salt, tobacco or betelnut. They cannot speak, but indicate their wishes with various conventional gestures, …
Despite the playful, Halloween-like aspects of this practice … 10% of the male masks portrayed [violent spirits], and they were associated with the commission of ritually sanctioned murder. These murders committed by the violent spirits were always attributed to Nggwal.

The costumes of the violent spirits would gain specific insignia after committing each killing, … “Word goes out that Nggwal has “swallowed” another victim; the killer remains technically anonymous, even though most Nggwal members know, or have a strong inkling of, his identity.” … are universally feared, and nothing can vacate a hamlet so quickly as one of these spooks materializing out of the gloom of the surrounding jungle. … Nggwal benefits some people at the expense of others. Individuals of the highest initiation level within the Tambaran cult have increased status for themselves and their respective clans, and they have exclusive access to the pork of the secret feasts that is ostensibly consumed by Nggwal. The women and children are dominated severely by Nggwal and the other Tambaran cult spirits, and the young male initiates must endure severe dysphoric rituals to rise within the cult. (more)

So in these societies, top members of secret societies could, by wearing certain masks, literally get away with murder. These societies weren’t lawless; had these men committed murder without the masks, they would have been prosecuted and punished.

Apparently many societies have had such divisions between an official legal system that was supposed to fairly punish anyone for hurting others, alongside less visible but quite real systems whereby some elites could far more easily get away with murder. Has this actually been the usual case in history?


Pay More For Results

A simple and robust way to get others to do useful things is to “pay for results”, i.e., to promise to make particular payments for particular measurable outcomes. The better the outcomes, the more someone gets paid. This approach has long been used in production piece-rates, worker bonuses, sales commissions, CEO incentive pay, lawyer contingency fees, sci-tech prizes, auctions, and outcome-contracts in PR, marketing, consulting, IT, medicine, charities, development, and in government contracting more generally.

Browsing many articles on the topic, I mostly see either dispassionate analyses of its advantages and disadvantages, or passionate screeds warning against its evils, especially re sacred sectors like charity, government, law, and medicine. Clearly many see paying for results as risking too much greed, money, and markets in places where higher motives should reign supreme.

Which is too bad, as those higher motives are often missing, and paying for results has a lot of untapped potential. Even though the basic idea is old, we have yet to explore a great many possible variations. For example, many of the social reforms that I’ve considered promising over the years can be framed as paying for results. In particular, I’ve liked science prizes, combinatorial auctions, and:

  1. Buy health, not health care – Get an insurer to sell you both life & health insurance, so that they lose a lot of money if you are disabled, in pain, or dead. Then if they pay for your medical expenses, you can trust them to trade those expenses well against lower harm chances.
  2. Fine-insure-bounty criminal law system – Catch criminals by paying a bounty to whomever proves that a particular person did a particular crime, require everyone to get crime insurance, have fines as the official punishment, and then let insurers and clients negotiate individual punishments, monitoring, freedoms, and co-liabilities.
  3. Prediction & decision markets – There’s a current market probability, and if you believe the probability is higher, you expect to profit by buying at that price. In this way you are paid to fix any errors in our current probabilities, via winning your bets. We can use the resulting market prices to make many useful decisions, like firing CEOs.
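The incentive in item 3 can be sketched in a few lines of Python; the prices and beliefs below are hypothetical illustrations, not features of any real market:

```python
# Sketch: a $1 prediction-market contract pays out iff the event happens.
# If your probability estimate exceeds the market price, buying has
# positive expected value, so you profit (on average) by correcting
# the price. All numbers here are hypothetical.

def expected_profit(market_price, your_probability):
    """Expected gain per $1 contract bought at market_price."""
    return your_probability - market_price

ev = expected_profit(0.30, 0.45)  # market says 30%, you believe 45%
```

Traders who buy at prices below their estimates (and sell above them) both earn expected profits and push the price toward the best available estimate, which is what makes the resulting prices usable for decisions.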

We have some good basic theory on paying for results. For example, paying your agents for results works better when you can measure the things that you want sooner and more accurately, when you are more risk-averse, and when your agents are less risk-averse. It is less useful when you can watch your agents well, and you know what they should be doing to get good outcomes.

The worst case is when you are a big risk-neutral org with lots of relevant expertise who pays small risk-averse individuals or organizations, when you can observe your agents well and know roughly what they should do to achieve good outcomes, and yet those outcomes are too complex or hidden to measure. In this case you should just pay your agents to do things the right way, and ignore outcomes.

In contrast, the best case for paying for results is when you are more risk-averse than your agents, you can’t see much of what they do, you don’t know much about how they should act to best achieve good outcomes, and you have good fast measures of the outcomes you want. So this theory suggests that ordinary people trying to get relatively simple things from experts tend to be good situations for paying for results, especially when those experts can collect together into large more-risk-neutral organizations.

For example, when selling a house or a car, the main outcome you care about is the sale price, which is quite observable, and you don’t know much about how best to sell to future buyers. So for you a good system is to hold an auction and give it to the agent who offers the highest price. Then that agent can use their expertise to figure out how to best sell your item to someone who wants to use it.

While medicine is complex and can require great expertise, the main outcomes that you want from medicine are simple and relatively easy to measure. You want to be alive, able to do your usual things, and not in pain. (Yes, you also have a more hidden motive to show that you are willing to spend resources to help allies, but that is also easy to measure.) Which is why relatively simple ways to pay for health seem like they should work. 

Similarly, once we have defined a particular kind of crime, and have courts to rule on particular accusations, then we know a lot about what outcomes we want out of a crime system: we want less crime. If the process of trying to detect or punish a criminal could hurt third parties, then we want laws to discourage those effects. But with such laws in place, we can more directly pay to catch criminals, and to discourage the committing of crimes. 

Finally when we know well what events we are trying to predict, we can just pay people who predict them well, without needing to know much about their prediction strategies. Overall, paying for results seems to still have enormous untapped potential, and I’m doing my part to help that potential be realized.


Why Age of Em Will Happen

In some technology competitions, winners dominate strongly. For example, while gravel may cover a lot of roads if we count by surface area, if we weigh by vehicle miles traveled then asphalt strongly dominates as a road material. Also, while some buildings are cooled via fans and very thick walls, the vast majority of buildings in rich and hot places use air-conditioning. In addition, current versions of software systems also tend to dominate over older versions. (E.g., Windows 10 over Windows 8.)

However, in many other technology competitions, older technologies remain widely used over long periods. Cities were invented ten thousand years ago, yet today only about half of the population lives in them. Cars, trains, boats, and planes have taken over much transportation, yet we still do plenty of walking. Steel has replaced wood in many structures, yet wood is still widely used. Fur, wool, and cotton aren’t used as often as they once were, but they are still quite common as clothing materials. E-books are now quite popular, but paper book sales are still growing.

Whether or not an old tech still retains wide areas of substantial use depends on the average advantage of the new tech, relative to the variation of that advantage across the environments where these techs are used, and the variation within each tech category. All else equal, the wider the range of environments, and the more diverse is each tech category, the longer that old tech should remain in wide use.

For example, compare the set of techs that start with the letter A (like asphalt) to the set that start with the letter G (like gravel). As these are relatively arbitrary sets that do not “cut nature at its joints”, there is wide diversity within each category, and each set is applied to a wide range of environments. This makes it quite unlikely that one of these sets will strongly dominate the other.

Note that techs that tend to dominate strongly, like asphalt, air-conditioning, and new software versions, more often appear as a lumpy change, e.g., all at once, rather than via a slow accumulation of many changes. That is, they more often result from one or a few key innovations, and have some simple essential commonality. In contrast, techs that have more internal variety and structure tend more to result from the accumulation of more smaller innovations.

Now consider the competition between humans and computers for mental work. Today human brains earn more than half of world income, far more than the costs of computer hardware and software. But over time, artificial hardware and software have been improving, and slowly commanding larger fractions. Eventually this could become a majority. And a key question is then: how quickly might computers come to dominate overwhelmingly, doing virtually all mental work?

On the one hand, the ranges here are truly enormous. We are talking about all mental work, which covers a very wide range of environments. And not only do humans vary widely in abilities and inclinations, but computer systems seem to encompass an even wider range of designs and approaches. And many of these are quite complex systems. These facts together suggest that the older tech of human brains could last quite a long time (relative of course to relevant timescales) after computers came to do the majority of tasks (weighted by income), and that the change over that period could be relatively gradual.

For an analogy, consider the space of all possible non-mental work. While machines have surely been displacing humans for a long time in this area, we still do many important tasks “by hand”, and overall change has been pretty steady for a long time period. This change looked nothing like a single “general” machine taking over all the non-mental tasks all at once.

On the other hand, human minds are today stuck in old bio hardware that isn’t improving much, while artificial computer hardware has long been improving rapidly. Both these states, of hardware being stuck and improving fast, have been relatively uniform within each category and across environments. As a result, this hardware advantage might plausibly overwhelm software variety to make humans quickly lose most everywhere.

However, eventually brain emulations (i.e. “ems”) should be possible, after which artificial software would no longer have a hardware advantage over brain software; they would both have access to the same hardware. (As ems are an all-or-nothing tech that quite closely substitutes for humans and yet can have a huge hardware advantage, ems should displace most all humans over a short period.) At that point, the broad variety of mental task environments, and of approaches to both artificial and em software, suggests that ems may well stay competitive on many job tasks, and that this status might last a long time, with change being gradual.

Note also that as ems should soon become much cheaper than humans, the introduction of ems should initially cause a big reversion, wherein ems take back many of the mental job tasks that humans had recently lost to computers.

In January I posted a theoretical account that adds to this expectation. It explains why we should expect brain software to be a marvel of integration and abstraction, relative to the stronger reliance on modularity that we see in artificial software, a reliance that allows those systems to be smaller and faster built, but also causes them to rot faster. This account suggests that for a long time it would take unrealistically large investments for artificial software to learn to be as good as brain software on the tasks where brains excel.

A contrary view often expressed is that at some point someone will “invent” AGI (= Artificial General Intelligence). Not that society will eventually have broadly capable and thus general systems as a result of the world economy slowly collecting many specific tools and abilities over a long time. But that instead a particular research team somewhere will discover one or a few key insights that allow that team to quickly create a system that can do most all mental tasks much better than all the other systems, both human and artificial, in the world at that moment. This insight might quickly spread to other teams, or it might be hoarded to give this team great relative power.

Yes, under this sort of scenario it becomes more plausible that artificial software will either quickly displace humans on most all jobs, or do the same to ems if they exist at the time. But it is this scenario that I have repeatedly argued is pretty crazy. (Not impossible, but crazy enough that only a small minority should assume or explore it.) While the lumpiness of innovation that we’ve seen so far in computer science has been modest and not out of line with most other research fields, this crazy view postulates an enormously lumpy innovation, far out of line with anything we’ve seen in a long while. We have no good reason to believe that such a thing is at all likely.

If we presume that no one team will ever invent AGI, it becomes far more plausible that there will still be plenty of job tasks for ems to do, whenever ems show up. Even if working ems only collect 10% of world income soon after ems appear, the scenario I laid out in my book Age of Em is still pretty relevant. That scenario is actually pretty robust to such variations. As a result of thinking about these considerations, I’m now much more confident that the Age of Em will happen.

In Age of Em, I said:

Conditional on my key assumptions, I expect at least 30 percent of future situations to be usefully informed by my analysis. Unconditionally, I expect at least 5 percent.

I now estimate an unconditional 80% chance of it being a useful guide, and so will happily take bets based on a 50-50 chance estimate. My claim is something like:

Within the first D econ doublings after ems are as cheap as the median human worker, there will be a period where >X% of world income is paid for em work. And during that period Age of Em will be a useful guide to that world.

Note that this analysis suggests that while the arrival of ems might cause a relatively sudden and disruptive transition, the improvement of other artificial software would likely be more gradual. While overall rates of growth and change should increase as a larger fraction of the means of production comes to be made in factories, the risk is low of a sudden AI advance relative to that overall rate of change. Those concerned about risks caused by AI changes can more reasonably wait until we see clearer signs of problems.


Grabbing Now Versus Later

Today and yesterday’s Democratic debates suggest a big recent bump in tastes for regulation and redistribution, in order to lower the status of big business and the rich, and to spend more on the needy and worthy causes. South Korea, which I’ve just visited, sees a similar trend, as does Europe:

Europe’s mainstream parties are going back to the 1970s. In Germany, the U.K, Denmark, France and Spain, these parties are aiming to reverse decades of pro-market policy and promising greater state control of business and the economy, more welfare benefits, bigger pensions and higher taxes for corporations and the wealthy. Some have discussed nationalizations and expropriations. It could add up to the biggest shift in economic policy on the continent in decades. (more)

While I often hear arguments on the moral and economic wisdom of grabbing to redistribute, I rarely hear about the choice of whether to grab now versus later. The issues here are similar to those for the related choice in charity, of whether to give now versus later:

Then Robin Hanson of Overcoming Bias got up and just started Robin Hansonning at everybody. First he gave a long list of things that people could do to improve the effectiveness of their charitable donations. Then he declared that since almost no one does any of these, people don’t really care about charity, they’re just trying to look good. … he made some genuinely unsettling points.

One of his claims that generated the most controversy was that instead of donating money to charity, you should invest the money at compound interest, then donate it to charity later after your investment has paid off – preferably just before you die. … He said that the reason people didn’t do this was that they wanted the social benefits of having given money away, which are unavailable if you wait until just before you die to do so. And darn it, he was totally right. Not about the math – there are severe complications which I’ll bring up later – but about the psychology. (more)

Others … argue that giving now to help people who are sick or under-schooled creates future benefits that grow faster than ordinary growth rates. But … if real charity needs are just as strong in the future as today, then all we really need [for waiting to be better] are positive interest rates. (more)

You may be tempted to move resources from the rich and business profits to the poor and worthy projects, because you see business exploitation, you see low value in the rich buying mansions and yachts, you see others in great need, and you see great value in many worthy projects. But big business doesn’t actually exploit much, the consumption of the rich uses up fewer real resources than it may seem, and the rich tend to consume less relative to investing and donating.

So instead of grabbing stuff from the rich and businesses today, consider the option of waiting, to grab later. If you don’t grab stuff from them today, these actors will invest much of that stuff, producing a lot more stuff later. Yes, you might think some of your favorite projects are good investments, but let’s be honest; most of the stuff you grab won’t be invested, and the investments that do happen will be driven more by political than rate-of-return considerations. Furthermore, if you grab a lot today, news of that event will discourage future folks from generating stuff, and encourage those folks to move and hide it better.

Also, the rich put much of what they don’t invest into charity. And there’s good reason to think they do a decent job with their charity efforts. Most have impressive management abilities, access to skilled associates, and a willingness to take risks. And they can more effectively resist political pressures that typically mess up government-managed projects.

Finally, when the rich do spend money on themselves, much of that goes to paying for positional and status goods that generate much less in the way of real wastes. When they bid up the price of prestigious clubs, real estate, colleges, first-class seats, vanity books and conference talks, etc., real resources are transferred to those who get less prestigious versions. And our best model of status inequality says that allowing more of this doesn’t cause net harm.

So the longer you wait to grab from the rich, the longer they will grow wealth, donate it well, and transfer via status goods. Just as it is dangerous to borrow too much, because you may face big future crises, it can be unwise to grab from the rich today, when you could grow and farm them to create a crop available to harvest tomorrow. South Korea would have been much worse off doing big grabs in 1955, relative to waiting until today to grab.

Added 29June: Some people ask “wait how long?” One strategy would be to wait for a serious crisis. This is in fact when the rich have lost most of their wealth in history, in disasters like wars, pandemics, and civilization collapse. Another strategy would be to wait until there’s so much capital that market rates of return fall to very low levels.


Libertarian Varieties

Here at GMU Econ we tend to lean libertarian, but in a wide range of ways. For example, here are two recent posts by colleagues:

Don Boudreaux:

The economy is an emergent and dynamic order that was not, and could not possibly be, designed – and, hence, that cannot possibly be successfully engineered. … the economy is not a device or an organization with a purpose. It is, instead, the result of the multitude of interactions of hundreds of millions of diverse individual entities – persons, households, firms, and governments – each pursuing its own purposes. …

Competent intro-economics professors keep their aspirations modest. In my case, these are two. The first is to impress upon my students the full weight of the fact that the economy is an inconceivably complex order of interactions that cannot possibly be engineered. The second is to inspire students always to ask questions that too often go unasked – questions such as “From where will the resources come to provide that service?” “Why should Sam’s assessment of Sally’s choices be regarded more highly than Sally’s own assessment?” “What consequences beyond the obvious ones might result from that government action?” And, most importantly of all, “As compared to what?”

Students who successfully complete any well-taught economics course do not have their egos inflated with delusions that they can advise Leviathan to engineer improvements in society. Quite the opposite. But these students do emerge with the too-rare humility that marks those who understand that the best service they can offer is to ask penetrating and pertinent questions that are asked by almost no others. (more)

I’m a big fan of learning to ask good questions; it is great to be able to see puzzles, and to resist the temptation to explain them away too quickly. However, I’m less enamored of teaching people to “ask questions” when they are supposed to see certain answers as obvious.

And the fact that a system is complex doesn’t imply that one cannot usefully “engineer” connections to it. For example, the human body is complex, and yet we can usefully engineer our diets, views, clothes, furniture, air input/outputs, sanitation, and medical interventions.

Yes, most students are overly prone to endorse simple-minded policies with large side effects that they do not understand. But I attribute this less to a lack of awareness of complexity, and more to an eagerness to show values; they care less about the effects of policies than about the values they signal by supporting them. After all, people are also prone to offer overly simple-minded advice to the individual people around them, for similar reasons.

Dan Klein:

Government is a special sort of player in society; its initiations of coercion differ from those of criminals. Its coercions are overt, institutionalized, openly rationalized, even supported by a large portion of the public. They are called intervention or restriction or regulation or taxation, rather than extortion, assault, theft, or trespass. But such government interventions are still initiations of coercion. That’s important, because recognizing it helps to sustain a presumption against them, a presumption of liberty. CLs [= classical liberals] and libertarians think that many extant interventions do not, in fact, meet the burden of proof for overcoming the presumption. Many interventions should be rolled back, repealed, abolished.

Thus CLs and libertarians favor liberalizing social affairs. That goes as general presumption: For business, work, and trade, but also for guns and for “social” issues, such as drugs, sex, speech, and voluntary association.

CLs and libertarians favor smaller government. Government operations, such as schools, rely on taxes or privileges (and sometimes partially user fees). Even apart from the coercive nature of taxation, they don’t like the government’s playing such a large role in social affairs, for its unhealthy moral and cultural effects.

There are some libertarians, however, who have never seen an intervention that meets the burden of proof. They can be categorical in a way that CLs are not, believing in liberty as a sort of moral axiom. Sometimes libertarians ponder a pure-liberty destination. They can seem millenarian, radical, and rationalistic. …
But libertarian has also been used to describe a more pragmatic attitude situated in the status quo yet looking to liberalize, a directional tendency to augment liberty, even if reforms are small or moderate. (more)

Along with Dan, I only lean against government intervention; that presumption can be and is often overcome. But the concept of coercion isn’t very central to my presumption. At a basic level, I embrace the usual economists’ market failure analysis, preferring interventions that fix large market failures, relative to obvious to-be-expected government failures.

But at a meta level, I care more about having good feedback/learning/innovation processes. The main reason that I tend to be wary of government intervention is that it more often creates processes with low levels of adaptation and innovation regarding technology and individual preferences. Yes, in principle dissatisfied voters can elect politicians who promise particular reforms. But voters have quite limited spotlights of attention and must navigate long chains of accountability to detect and induce real lasting gains.

Yes, low-government mechanisms often also have big problems with adaptation and innovation, especially when customers mainly care about signaling things like loyalty, conformity, wealth, etc. Even so, the track record I see, at least for now, is that these failures have been less severe than comparable government failures. In this case, the devil we know more does in fact tend to be better than the devil we know less.

So when I try to design better social institutions, and to support the proposals of others, I’m less focused than many on assuring zero government intervention, or on minimizing “coercion” however conceived, and more concerned to ensure healthy competition overall.


We Agree On So Much

In a standard Bayesian model of beliefs, an agent starts out with a prior distribution over a set of possible states, and then updates to a new distribution, in principle using all the info that agent has ever acquired. Using this new distribution over possible states, this agent can in principle calculate new beliefs on any desired topic. 

Regarding their belief on a particular topic then, an agent’s current belief is the result of applying their info to update their prior belief on that topic. And using standard info theory, one can count the (non-negative) number of info bits that it took to create this new belief, relative to the prior belief. (The exact formula is Σi pi log2(pi/qi), where pi is the new belief, qi is the prior, and i ranges over possible answers to this topic question.)
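This bit count is just the relative entropy (KL divergence, in base 2) between the new and prior beliefs; a minimal Python sketch:

```python
import math

def info_bits(posterior, prior):
    """Info bits separating a posterior from a prior belief:
    the sum over answers i of p_i * log2(p_i / q_i)."""
    return sum(p * math.log2(p / q)
               for p, q in zip(posterior, prior) if p > 0)

# Going from a 50-50 prior to 90% confidence on a yes/no question
# takes about 0.53 bits; full certainty would take a full bit.
bits = info_bits([0.9, 0.1], [0.5, 0.5])
```

Note the count is zero when the posterior equals the prior, and is never negative, matching the non-negativity claimed above.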

How much info an agent acquires on a topic is closely related to how confident they become on that topic. Unless a prior starts out very confident, high confidence later can only come via updating on a great many info bits. 

Humans typically acquire vast numbers of info bits over their lifetime. By one estimate, we are exposed to 34GB per day. Yes, as a practical matter we can’t remotely make full use of all this info, but we do use a lot of it, and so our beliefs do over time embody a lot of info. And even if our beliefs don’t reflect all our available info, we can still talk about the number of bits embodied in any given level of confidence an agent has on a particular topic.

On many topics of great interest to us, we acquire a huge volume of info, and so become very confident. For example, consider how confident you are at the moment about whether you are alive, whether the sun is shining, or whether you have ten fingers. You are typically VERY confident about such things, because you have access to a great many relevant bits.

On a great many other topics, however, we hardly know anything. Consider, for example, many details about the nearest alien species. Or even about the life of your ancestors ten generations back. On such topics, if we put in sufficient effort we may be able to muster many very weak clues, clues that can push our beliefs in one direction or another. But being weak, these clues don’t add up to much; our beliefs after considering such info aren’t that different from our previous beliefs. That is, on these topics we have less than one bit of info. 

Let us now collect a large broad set of such topics, and ask: what distribution should we expect to see over the number of bits per topic? This number must be positive; for many familiar topics it is much, much larger than one, while for other large sets of topics it is less than one.

The distribution most commonly observed for numbers that must be positive yet range over many orders of magnitude is: lognormal. And so I suggest that we tentatively assume a (large-sigma) lognormal distribution over the number of info bits that an agent learns per topic. This may not be exactly right, but it should be qualitatively in the ballpark.  

One obvious implication of this assumption is: few topics have nearly one bit of info. That is, most topics are ones where either we hardly know anything, or where we know so much that we are very confident. 
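This implication is easy to check numerically. As a rough sketch, assume bits-per-topic follows a lognormal with a large sigma (the parameters below are illustrative assumptions, not estimates from data), and count what fraction of sampled topics falls anywhere near one bit:

```python
import random

random.seed(0)
# Assumed (hypothetical) lognormal over bits-per-topic, with a large
# sigma so values span many orders of magnitude, as the post suggests.
samples = [random.lognormvariate(0, 8) for _ in range(100_000)]

# Fraction of topics whose info content is "of order one bit".
near_one = sum(0.5 <= b <= 2 for b in samples) / len(samples)
print(f"fraction of topics with ~1 bit of info: {near_one:.3f}")
```

With a sigma this large, only a few percent of topics land in the roughly-one-bit band; the rest are either far below one bit (we hardly know anything) or far above it (we are very confident).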

Note that these typical topics are not worth much thought, discussion, or work to cut biases. For example, when making decisions to maximize expected utility, or when refining the contribution that probabilities on one topic make to other topic probabilities, getting 10% of one’s bits wrong just won’t make much of a difference here. Changing 10% of 0.01 bit still leaves one’s probabilities very close to one’s prior. And changing 10% of a million bits still leaves one with very confident probabilities.

Only when the number of bits on a topic is of order unity do one’s probabilities vary substantially with 10% of one’s bits. These are the topics where it can be worth paying a fixed cost per topic to refine one’s probabilities, either to help make a decision or to help update other probability estimates. And these are the topics where we tend to think, talk, argue, and worry about our biases.

It makes sense that we tend to focus on pondering such “talkable topics”, where such thought can most improve our estimates and decisions. But don’t let this fool you into thinking we hardly agree on anything. For the vast majority of topics, we agree either that we hardly know anything, or that we quite confidently know the answer. We only meaningfully disagree on the narrow range of topics where our info is on the order of one bit, topics where it is in fact worth the bother to explore our disagreements. 

Note also that for these key talkable topics, making an analysis mistake on just one bit of relevant info is typically sufficient to induce large probability changes, and thus large apparent disagreements. And for most topics it is quite hard to think and talk without making at least one bit’s worth of error. Especially if we consume 34GB per day! So it’s completely to be expected that we will often find ourselves disagreeing on talkable topics at the level of a few bits.

So maybe cut yourself and others a bit more slack about your disagreements? And maybe you should be more okay with our using mechanisms like betting markets to average out these errors. You really can’t be that confident that it is you who has made the fewest analysis errors. 


Range

A wide-ranging review of research … rocked psychology because it showed experience simply did not create skill in a wide range of real-world scenarios, from college administrators assessing student potential to psychiatrists predicting patient performance to human resources professionals deciding who will succeed in job training. In those domains, which involved human behavior and where patterns did not clearly repeat, repetition did not cause learning. Chess, golf, and firefighting are exceptions, not the rule. …

In wicked domains, the rules of the game are often unclear or incomplete, there may or may not be repetitive patterns and they may not be obvious, and feedback is often delayed, inaccurate, or both. In the most devilishly wicked learning environments, experience will reinforce the exact wrong lessons. (more)

David Epstein’s book Range is a needed correction to other advice often heard lately, that the secret of life success is to specialize as early as possible. While early specializing works in some areas, more commonly one learns more by ranging more widely, collecting analogies and tools which can be applied to many new problems, and better learning which specialties fit you best.

I’ve done a lot of wide ranging in my life, so I naturally like this advice. However, as one can obviously take this advice too far, the hard question is how widely to range for how long, and then when and how quickly to narrow.

Alas, Epstein seems less useful on this hard tradeoff question. He does make it plausible that your chance of achieving the very highest success in creative areas like art or research is maximized by a wider range than is typical. But as most people have little chance of reaching such heights, this doesn’t say much to them.

I’m struck by the fact that all of his concrete examples of wide rangers who succeeded are people who at some point specialized enough to gain status within a particular specialty area. He gives stats which suggest that wide rangers continue to be productive and useful to society even if they never specialize so much, but those people are apparently not seen as personal successes.

For example, Epstein cites a study showing that innovative academic papers which cite journals never before cited in the same paper are published at first in less prestigious journals, but eventually get more citations. Yet in fields like economics, status depends much more on journal prestige than eventual citations.

So while you might contribute more to the world by continuing to range widely, you often succeed more personally by ranging somewhat widely at first, and then specializing enough to make specialists see you as one of them.

The hard problem then is how to get specialists to credit you for advancing their field when they don’t see you as a high status one of them. Epstein quotes people who say we should just fund all research topics even if they don’t seem promising, but that obviously just won’t work.


Stephenson’s Em Fantasy

Neal Stephenson’s Snow Crash (’92) and Diamond Age (’95) were once some of my favorite science fiction novels. And his Anathem (’08) is the very favorite of a friend. So hearing that his new book Fall; or, Dodge in Hell (’19) is about ems, I had to read it. And given that I’m author of Age of Em and care much for science fiction realism, I had to evaluate this story in those terms. (Other reviews don’t seem to care: 1 2 3 4 5)

Alas, in terms of em realism, this book disappoints. To explain, I’m going to have to give spoilers; you are warned. Continue reading "Stephenson’s Em Fantasy" »


Decision Markets for Monetary Policy

The goals of monetary policy are to promote maximum employment, stable prices and moderate long-term interest rates. By implementing effective monetary policy, the Fed can maintain stable prices, thereby supporting conditions for long-term economic growth and maximum employment. (more)

Caltech, where I got my PhD in social science, doesn’t have specialists in macroeconomics, and they don’t teach the subject to grad students. They just don’t respect the area enough, they told me. And I haven’t gone out of my way to make up this deficit in my background; other areas have seemed more interesting. So I mostly try not to have or express opinions on macroeconomics.

I periodically hear arguments for NGDP Targeting, such as from Scott Sumner, who at one point titled his argument “How Prediction Markets Can Improve Monetary Policy: A Case Study.” But as far as I can tell, while this proposal does use market prices in some ways, it depends more on specific macroeconomic beliefs than a prediction markets approach needs to.

These specific beliefs may be well supported beliefs, I don’t know. But, I think it is worth pointing out that if we are willing to consider radical changes, we could instead switch to an approach that depends less on particular macroeconomic beliefs: decision markets. Monetary policy seems an especially good case to apply decision markets because they clearly have two required features: 1) A clear set of discrete decision options, where it is clear afterward which option was taken, 2) A reasonably strong consensus on measurable outcomes that such decisions are trying to increase. 

That is, monetary policy consists of clear public and discrete choices, such as on short term interest rates. Call each discrete choice option C. And it is widely agreed that the point of this policy is to promote long term growth, in part via moderating the business cycle. So some weighted average of real growth, inflation, unemployment, and perhaps a few more after-the-fact business cycle indicators, over the next decade or two seems a sufficient summary of the desired outcome. Let’s call this summary outcome O.  

So monetary policy just needs to pick a standard metric O that will be known in a decade or two, estimate E[O|C] for each choice C under consideration, and compare these estimates. And this is exactly the sort of thing that decision markets can do well. There are some subtleties about how exactly to do it best. But many variations should work pretty well.

For example, I doubt it matters that much how exactly we weight the contributions to O. And to cut off skepticism on causality, we could use a 1% chance of making each discrete choice randomly, and have decision market estimates be conditional on that random choice. Suffering a 1% randomness seems a pretty low cost to cut off skepticism.
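The decision rule itself is simple. As a minimal sketch (the choice labels and market estimates below are hypothetical, not real data), take the market-estimated E[O|C] for each discrete choice C, usually implement the highest-estimate choice, and with a small probability choose at random so that prices conditional on the random choice answer causality skeptics:

```python
import random

# Hypothetical market estimates of E[O | C] for each discrete policy
# choice C, where O is the agreed weighted business-cycle outcome.
market_estimates = {
    "raise rates 0.25%": 1.8,
    "hold rates": 2.1,
    "cut rates 0.25%": 2.0,
}

def pick_policy(estimates, random_prob=0.01, rng=random):
    """Usually take the choice with the highest market-estimated
    outcome; with probability random_prob, pick a choice at random,
    so estimates conditional on the random choice stay interpretable
    as causal effects."""
    if rng.random() < random_prob:
        return rng.choice(list(estimates))
    return max(estimates, key=estimates.get)

print(pick_policy(market_estimates))
```

In this sketch, 99% of the time the rule picks “hold rates”, the choice with the highest conditional estimate; the 1% random branch is the small price paid to cut off causality skepticism.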

For more, see the section “Monetary Policy Example” in my paper Shall We Vote on Values, But Bet on Beliefs?
