Monthly Archives: May 2022

Argument Selection Bias

One strategy to decide what to believe about X is to add up all the pro and con arguments that one is aware of regarding X, weighing each by its internal strength. Yes, it might not be obvious how to judge and combine arguments. But this strategy has a bigger problem; it risks a selection bias. What if the process that makes you aware of arguments has selected non-randomly from all the possible arguments?

One solution is to focus on very simple arguments. You might be able to exhaustively consider all arguments below some threshold of simplicity. However, here you still have to worry that simple arguments tend to favor a particular side of X. For example, if the question is “Is there some complex technical solution to simple problem X”, it may not work well to exclude all complex technical solution proposals.

We often see situations where far more effort seems to go into finding, honing, and publicizing pro-X arguments, relative to anti-X arguments. In this case the key question is what processes induced those asymmetric efforts. For example, as the left tends to dominate the high end of academia, very academic policy arguments strongly favor left policies. So the question is: what process induced such people to become left?

If new academics started out equally distributed on the left and right, and then searched among academic arguments, becoming more left only as they discovered mainly only left arguments in that space, then we wouldn’t have so much of a selection bias to worry about. However, if the initial distribution of academics leans heavily left for non-argument reasons, then there could be a big selection bias among very academic arguments, even if not perhaps among the arguments that induced people to become academics in the first place.

Often there are claims X where not only does most everyone support X, most everyone is also eager to repeat arguments favoring X, to identify and repudiate any who oppose X, and to ridicule their supporting arguments. In these cases, there is far less energy and effort available to find, hone, and express anti-X claims. For example, consider topics related to racism, sexism, pedophilia, inequality, IQ, genes, or the value of school and medicine. In these cases we should expect strong selection biases favoring X, and thus for weight-of-argument purposes we should adjust our opinions to less favor these X.

However, sometimes there are contrarian claims X where far more effort goes into finding, honing, and expressing arguments supporting X. Consider the claims of 911-truthers, for example. Here we should expect a bias against X among the simple arguments that most people would use to justify their dismissing X, but a bias favoring X among the more complex arguments that 911-truthers would find when studying the many details close to the issue.

What if a topic is local, of interest only to your immediate associates? In this case you should expect a bias favoring those who are more motivated to want others to believe X, and favoring those who are just generally better at finding, honing, and expressing arguments. Thus being known to be good at arguing should generally make one less effective at persuading associates.

In larger social worlds, however, where arguments can pass through many intermediaries, it won’t work as well to discount arguments by the abilities of their sources. In that case one will have to discount arguments based on overall features of the communities who favor and oppose X. Here those who are especially good at arguing will be especially tempted to join such discussions, as their audience is less able to apply personal discounts regarding their arguing abilities.

In all of these cases, we would ideally adjust our standards for discounting beliefs continuously, with the many parameters by which we estimate context-dependent selection biases. But we may sometimes instead feel constrained in our abilities to make such adjustments. Our lower level mental processes may just weigh up the arguments they hear without applying enough discounts.

In which case we might just want to limit our exposure to the sources that we expect to be unusually subject to favorable selection biases. This may sometimes justify common practices of sticking one’s head in the sand, and fingers in one’s ears, regarding suspect sources. And we might also reasonably show a “perverse” forbidden-fruit fascination with hearing arguments that favor forbidden views.


You Choose Inequality

A simple but reasonable definition of inequality says that moving any part of a distribution toward its median value (while holding the rest of the distribution constant) reduces the inequality in that distribution. Moving a part away from the median value increases inequality.
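This definition is stated in terms of medians rather than any standard index, but standard numeric measures behave consistently with it in simple cases. As an illustration (using the Gini coefficient, a different and swapped-in measure, with made-up numbers), moving an above-median value further from the median raises measured inequality:

```python
# Gini coefficient: mean absolute difference over all (ordered) pairs,
# normalized by twice the mean; 0 means perfect equality.
def gini(xs):
    n, mean = len(xs), sum(xs) / len(xs)
    diff_sum = sum(abs(a - b) for a in xs for b in xs)
    return diff_sum / (2 * n * n * mean)

incomes = [1, 2, 3, 10]    # median sits between 2 and 3
further = [1, 2, 3, 100]   # top value moved further from the median

# Moving a part of the distribution away from the median raises inequality.
assert gini(further) > gini(incomes)
```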

The median adult income worldwide is ~$1000 ($3000 per household), and median wealth is $7500. If you make/have more than those amounts, and if you are trying to increase your personal income and wealth, then if successful your efforts will increase inequality in those distributions. The same applies for any distribution where you are above the median; your efforts to increase your personal value are efforts to increase inequality.

Thus you are trying to increase inequality if you try to increase your number of Twitter followers but have more than 200 now. And that’s compared to other people on Twitter. Compared to the median human, even going from zero to one Twitter followers increases inequality.

The median firm has four employees, so if your firm is larger, and you try to grow your firm, you are trying to increase inequality across firms. The median publication has one citation, and the median human has zero publications, so trying to increase either of those numbers for yourself is trying to increase inequality.

Medians for the US are an IQ of 98, reading comprehension of 7th/8th grade, 4 books read per year, and 6.3/4.3 lifetime sex partners (M/F age 25-49). So if you are in the US, your personal figures are higher, and you try to increase those figures, then you are trying to increase US inequality. And if US figures are higher than the world’s, you also increase world inequality, and that’s even true for many lower personal values.

You might try to justify improving any one above-median X by pointing to other Y on which you are below median, saying that you are a loser overall trying to improve your overall position. But are you really a loser compared to all humans alive today, or all humans ever so far, or all creatures ever so far?

Sometimes people try to justify their above-median efforts by claiming that they mainly fight against those who are even higher in the distribution than they. For example, their firm competes mainly with even larger firms, or their publications compete mainly with even more popular publications. But this just can’t be true for as many people as try to claim this justification. So how can we judge who are in fact the rare above-median Robin Hoods, taking from the even richer?

For everyone else, it seems you should admit that either (A) you count for more than others, so that your increases are more worthwhile than theirs, or (B) while reducing inequality is a nice goal, you have judged that it is just not as worthy a goal as just increasing these numbers in general, for anyone and everyone.

Added 6am: Sure, if your efforts to raise yourself also happen to raise the entire rest of the distribution by the same proportion or amount, or cause especially big rises for some below-median folks, then that may not increase inequality. But if your efforts raise yourself more than they raise others, the inequality effect issue remains.


Decision Market Math

Let me share a bit of math I recently figured out regarding decision markets. And let me illustrate it with Fire-The-CEO markets.

Consider two ways that we can split $1 cash into two pieces. One way is: $1 = “$1 if A” + “$1 if not A”, where A is 1 or 0 depending on whether a firm CEO stays in power until the end of the current quarter. Once we know the value of A, exactly one of these two assets can be exchanged for $1; the other is worthless. The chance a of the CEO staying is revealed by trades exchanging one unit of “$1 if A” for a units of $1.

The other way to split is $1 = “$x” + “$(1-x)”, where x is a real number in [0,1], representing the stock price of that firm at quarter’s end, rescaled and clipped so that x always falls in [0,1]. Once we know the value of x, then one unit of “$x” can be exchanged for x units of $1, while one unit of “$(1-x)” can be exchanged for 1-x units of $1. The expected value x of the stock is revealed by trades exchanging one unit of “$x” for x units of $1.

We can combine this pair of two-way splits into a single four-way split:
$1 = “$x if A” + “$x if not A” + “$(1-x) if A” + “$(1-x) if not A”.
A simple combinatorial trading implementation would keep track of the quantities each user has of these four assets, and allow them to trade some of these assets for others, as long as none of these quantities became negative. The min of these four quantities is the cash amount that a user can walk away with at any time. And at quarter’s end, the rest turn into some amount of cash, which the user can then walk away with.
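The bookkeeping just described can be sketched in a few lines (the asset names and the deposit example are mine; a real implementation would also need the trading interface):

```python
# Track a user's holdings of the four assets from the combined split:
#   $1 = "$x if A" + "$x if not A" + "$(1-x) if A" + "$(1-x) if not A"
ASSETS = ("x_if_A", "x_if_notA", "y_if_A", "y_if_notA")  # y stands for (1-x)

def cash_now(holdings):
    # The min of the four quantities is redeemable as cash at any time,
    # since one unit of each asset together reconstitutes $1.
    return min(holdings[a] for a in ASSETS)

def settle(holdings, A, x):
    # At quarter's end, A is 0 or 1 and x is in [0,1]; each asset pays off.
    payoff = {
        "x_if_A":    x * A,
        "x_if_notA": x * (1 - A),
        "y_if_A":    (1 - x) * A,
        "y_if_notA": (1 - x) * (1 - A),
    }
    return sum(holdings[a] * payoff[a] for a in ASSETS)

# Depositing $2 gives 2 units of each asset; settlement returns the $2.
h = {a: 2.0 for a in ASSETS}
assert cash_now(h) == 2.0
assert settle(h, A=1, x=0.5) == 2.0   # 2*0.5 + 0 + 2*0.5 + 0
```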

To advise the firm board on whether to fire the CEO, we are interested in the value that the CEO adds to the firm value. We can define this added value as x1-x2, where
x1 = E[x|A] is revealed by trades exchanging 1 unit of “$x if A” for x1 units of “$1 if A”
x2 = E[x|not A] is revealed by trades exchanging 1 unit of “$x if not A” for x2 units of “$1 if not A”.

In principle users could trade any bundle of these four assets for any other bundle. But three kinds of trades have the special feature of supporting maximal use of user assets in the following sense: when users make trades of only that type, two of their four asset quantities will reach zero at the same time. Reaching zero sets the limit of how far a user can trade in that direction.

To see this, let us define:
d1 = change in quantity of “$x if A”,
d2 = change in quantity of “$x if not A”,
d3 = change in quantity of “$(1-x) if A”,
d4 = change in quantity of “$(1-x) if not A”.

Two of these special kinds of trades correspond to the simple A and x trades that we described above. One kind exchanges 1 unit of “$1 if A” for a units of $1, so that d1=d3, d2=d4, -a*d1=(1-a)*d2. The other kind exchanges 1 unit of “$x” for x units of $1, so that d1=d2, d3=d4, -x*d1=(1-x)*d3.

The third special trade bundles the diagonals of our 2×2 array of assets, so that d1=d4, d2=d3, -q*d1=(1-q)*d2. But what does q mean? That’s the math I worked out: q = (1-a) + (2a-1)*x + 2a(1-a)*r*x, where r = (x1-x2)/x, and x = a*x1 + (1-a)*x2. So when we have market prices a,x from the other two special markets, we can describe trade ratios q in this diagonal market in terms of the more intuitive parameter r, i.e., the percent value the CEO adds to this firm.
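As a check on that q formula: substituting x = a*x1 + (1-a)*x2 and r*x = x1-x2 reduces it algebraically to a*x1 + (1-a)*(1-x2), which is the expected payoff of the diagonal bundle “$x if A” + “$(1-x) if not A”. (That reduction is my own check, not stated above.) A quick numeric sketch:

```python
# The diagonal-trade price formula from the text:
#   q = (1-a) + (2a-1)*x + 2a(1-a)*r*x,
# with r = (x1-x2)/x and x = a*x1 + (1-a)*x2.
def q_formula(a, x1, x2):
    x = a * x1 + (1 - a) * x2
    r = (x1 - x2) / x
    return (1 - a) + (2 * a - 1) * x + 2 * a * (1 - a) * r * x

# Expected payoff of the bundle "$x if A" + "$(1-x) if not A".
def q_expected(a, x1, x2):
    return a * x1 + (1 - a) * (1 - x2)

# The two agree for any prices a and conditional stock estimates x1, x2.
for a, x1, x2 in [(0.7, 0.6, 0.5), (0.3, 0.9, 0.2), (0.5, 0.4, 0.4)]:
    assert abs(q_formula(a, x1, x2) - q_expected(a, x1, x2)) < 1e-9
```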

When you subsidize markets with many possible dimensions of trade, you don’t have to subsidize all the dimensions equally. So in this case you could subsidize the q=r type trades much more than you do the a or x type trades. This would let you take a limited subsidy budget and direct it as much as possible toward the main dimension of interest: this CEO’s added value.


New-Hire Prediction Markets

In my last post, I suggested that the most promising place to test and develop prediction markets is this: get ordinary firms to pay for mechanisms that induce their associates to advise their key decisions. I argued that what we need most is a regime of flexible trial and error, searching in the space of topics, participants, incentives, etc. for approaches that can add value here while avoiding the political disruptions that have plagued previous trials.

If you had a firm willing to participate in such a process, you’d want to be opportunistic about the topics of your initial trials. You’d ask them what are their most important decisions, and then seek topics that could inform some of those decisions cheaply, quickly, and repeatedly, to allow rapid learning from experimentation. But what if you don’t have such a firm on the hook, and instead seek a development plan to attract many firms?

In this case, instead of planning to curate a set of topics specific to your available firm, you might want to find and focus on a general class of topics likely to be especially valuable and feasible in roughly the same way at a wide range of firms. When focused on such a class, trials at any one firm should be more informative about the potential for trials at other firms.

One plausible candidate is: deadlines. A great many firms have projects with deadlines, and are uncertain whether they will meet those deadlines. They should want to know not only the chance of making the deadline, but how that chance might change if they changed the project’s resources, requirements, or management. If one drills down to smaller sub-projects, whose deadlines tend to be sooner, this can allow for many trials within short time periods. Alas, this topic is also especially disruptive, as markets here tend to block project managers’ favorite excuses for deadline failure.

Here’s my best-guess topic area: new hires. Most small firms, and small parts of big firms, hire a few new people every year, where they pay special attention to comparing each candidate to a small pool of “final round” candidates. And these choices are very important; they add up to a big fraction of total firm decision value. Furthermore, most firms also have a standard practice of periodically issuing employee evaluations that are comparable across employees. Thus one could create prediction markets estimating the N-year-later (N=2?) employee evaluation of each final candidate, conditional on their being hired, as advice about whom to hire.
Yes, having to wait two years to settle bets is a big disadvantage, slowing the rate at which trial and error can improve practice. Yes, at many firms employee evaluations are a joke, unable to bear any substantial load of criticism or attention. Yes, you might worry about work colleagues trying to sabotage the careers of new hires that they bet against. And yes, new hire candidates would have to agree to have their application evaluated by everyone in the potential pool of market participants, at least if they reach the final round.
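A sketch of how such conditional estimates would advise a hire, with made-up candidate names and market prices (bets on a candidate would be refunded if that candidate is not hired, which is what makes each estimate conditional):

```python
# Hypothetical final-round candidates, with market prices interpreted as
# E[2-year-later employee evaluation | this candidate is hired].
# Trades on a candidate are called off (refunded) if she is not hired.
candidate_markets = {
    "candidate_1": 3.2,
    "candidate_2": 3.8,
    "candidate_3": 2.9,
}

# The markets' advice: hire whoever has the highest conditional estimate.
best = max(candidate_markets, key=candidate_markets.get)
assert best == "candidate_2"
```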

Even so, the value here seems so large as to make it well worth trying to overcome these obstacles. Few firms can be that happy with their new hire choices, reasonably fearing they are missing out on better options. And once you had a system working for final round hire choices, it could plausibly be extended to earlier hiring decision rounds.

Yes, this is related to my proposal to use prediction markets to fire CEOs. But that’s about firing, and this is about hiring. And while each CEO choice is very valuable, there is far more total value encompassed in all the lower personnel choices.


Prediction Markets Need Trial & Error

We economists have a pretty strong consensus on a few key points: 1) innovation is the main cause of long-term economic growth, 2) social institutions are a key changeable determinant of social outcomes, and 3) inducing the collection and aggregation of info is one of the key functions of social institutions. In addition, better institutional methods for collecting and aggregating info (ICAI) could help with the key meta-problems of making all other important choices, including the choice of our other institutions, especially institutions to promote innovation. Together all these points suggest that one of the best ways that we today could help the future is to innovate better ICAI.

After decades pondering the topic, I’ve concluded that prediction markets (and closely related techs) are our most promising candidate for a better ICAI; they are relatively simple and robust with a huge range of potential high-value applications. But, alas, they still need more tests and development before wider audiences can be convinced to adopt them.

The usual (good) advice to innovators is to develop a new tech first in the application areas where it can attract the highest total customer revenue, and also where customer value can pay for the highest unit costs. As the main direct value of ICAI is to advise decisions, we should thus seek the body of customers most willing to pay money for better decisions, and then focus, when possible, on their highest-value versions.

Compared to charities, governments, and individuals, for-profit firms are more used to paying money for things that they value, including decision advice. And the decisions of such firms encompass a large fraction, perhaps most, of the decision value in our society. This suggests that we should seek to develop and test prediction markets first in the context of typical decisions of ordinary business, slanted when possible toward their highest value decisions.

The customer who would plausibly pay the most here is the decision maker seeing related info, not those who want to lobby for particular decisions, nor those who want to brag about how accurate is their info. And they will usually prefer ways to elicit advice from their associates, instead of from distant curated panels of advisors.

We have so far seen dozens of efforts to use prediction markets to advise decisions inside ordinary firms. Typically, users are satisfied and feel included, costs are modest, and market estimates are similarly or substantially more accurate than other available estimates. Even so, experiments typically end within a few years, often due to political disruption. For example, market estimates can undermine manager excuses (e.g., “we missed the deadline due to a rare unexpected last-minute problem”), and managers dislike seeing their public estimates beaten by market estimates.

Here’s how to understand this: “Innovation matches elegant ideas to messy details.” While general thinkers can identify and hone the elegant ideas, the messy details must usually come from context-dependent trial and error. So for prediction markets, we must search in the space of detailed context-dependent ways to structure and deploy them, to find variations that cut their disruptions. First find variations that work in smaller contexts, then move up to larger trials. This seems feasible, as we’ve already done so for other potentially-politically-disruptive ICAI, such as cost-accounting, AB-tests, and focus groups.

Note that, being atheoretical and context-dependent, this needed experimentation poorly supports academic publications, making academics less interested. Nor can these experiments be enabled merely with money; they crucially need one or more organizations willing to be disrupted by many often-disruptive trials.

Ideally those who oversee this process would be flexible, willing and able as needed to change timescales, topics, participants, incentives, and who-can-see-what structures. And such trials should be done where those in the org feel sufficiently free to express their aversion to political disruption, to allow the search process to learn to avoid it. Alas, I have so far failed to persuade any organizations to host or fund such experimentation.

This is my best guess for the most socially valuable way to spend under ~$1M. Prediction markets offer enormous promise to realize vast social value, but it seems that promise will remain only potential until someone undertakes the small-scale experiments needed to find the messy details to match its elegant ideas. Will that be you?


Lottery Lawsuits, For Small Harm Law

Twenty-five years ago I posted a short essay, on which I commented ten years later. Let me now elaborate on an improved variation of that same idea.

Imagine you came out from the grocery store to find a scratch on the side of your car door, a scratch that matches the position of the door on the car next to yours. You estimate they’ve done you $100 of damage. But in our world today this is where the story ends, as it would usually be crazy to spend thousands on a lawyer to sue them for such a small amount. So law today does little to discourage such harms. People can sloppily scratch car doors without fearing that they will have to pay damages.

Now imagine a better world. You take a few pictures of the two cars, including their license plate, and then use a phone app to upload all this and officially declare that they owe you $100 in damages. Using the license plate photo, the car owner is identified and notified, and is issued a “ticket” in that amount, like tickets are now issued for parking violations. If they accept your claim and pay that amount, then it goes to you, and the issue is closed. (Same if they offer you a smaller amount to settle, which you accept.) Unpaid tickets accumulate in the usual way, and the local government uses its usual methods to try to get people to pay them.

The ticket is also settled, and no longer counted as unpaid, if they refuse to accept your claims, but still deposit at least $100. And if they do this, then you must also deposit at least $100. (You each might want to deposit more than this $100 min to help with trial legal fees.)

Both of you also submit a chance, like one in a thousand, and then both of your deposits are converted into lottery claims at the smaller of the two chances. These claims are then soon (i.e., in a few days) resolved, perhaps via collecting many similar legal cases. So if the smallest submitted chance was one in a thousand, then 999 times out of a thousand, both of your deposits disappear, and you are both notified that the issue is now settled.

However, one time out of a thousand, you both win the lottery, and then each of your accounts now holds 1000 times what you deposited there. At which point you could also settle the suit.

But if you don’t settle, then your lawsuit goes to trial, and if the court rules that their car door scratch hurt you by $100, then they now owe you $100K, to be paid out of their account. However if the court rules against you, and also affirms their automatic countersuit, that your suit was frivolous, then you now owe them $100K, to be paid out of your account. Once each of you has paid what you owe, any remaining funds in your accounts are returned to you each in cash, tax-free.

In this better world, if they scratch your car, then they expect to pay $100 on average, and you expect to get that on average. But if you just frivolously sue someone for $100, without a plausible prospect of winning, then you expect to pay $100 on average, and they expect to gain that amount. And thus in this world the prospect of such lawsuits changes behavior, toward more optimal care, just as it usually does for large harms today. Law now works to discourage small as well as large harms.
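The expected-value arithmetic behind this paragraph can be written out directly, using the example’s numbers and assuming the court would rule for the plaintiff and award the full claim at trial:

```python
# Expected payoffs in the scratched-door example.
p = 1 / 1000            # agreed lottery chance
deposit = 100.0         # each side deposits the claim amount
mult = 1 / p            # deposits scale by 1000x on a lottery win
award = deposit * mult  # $100K owed by the trial's loser

# Plaintiff: loses the deposit 999/1000 of the time; 1/1000 of the time
# the account becomes $100K and the trial adds another $100K in damages.
ev_plaintiff = (1 - p) * (-deposit) + p * (deposit * mult - deposit + award)

# Defendant: loses the deposit 999/1000 of the time; 1/1000 of the time
# the $100K account is paid out entirely as damages.
ev_defendant = (1 - p) * (-deposit) + p * (-deposit)

assert round(ev_plaintiff, 6) == 100.0   # expects to recover the $100 harm
assert round(ev_defendant, 6) == -100.0  # expects to pay the $100 harm
```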

Note that we might want to set some lower limit on allowed lottery chances, such as via a max limit on how much can appear in your account after winning. It also seems fine to let people sell their claims, and also to insure against these lottery risks, perhaps even by depositing money in other accounts to be won exactly when the main lottery is lost. And once notified, defendants should be required to save relevant info on a case until its lottery is resolved.

The key idea here is this: If I’m willing to suffer a lottery risk to sue you, you must also suffer the same lottery risk to defend yourself.

That is, if I claim that you hurt me and am willing to deposit an amount to cover your counter-claim that I’ve sued you frivolously, then I can force you to make (at least) the same size deposit, after which both of our deposits, and also our legal claims against each other, are converted into lottery claims. If we win this lottery, we do a trial the usual way, except now with larger stakes.


Why Did Religion Change?

In his new book How Religion Evolved: And Why It Endures, Robin Dunbar reviews many details of the history and correlates of religion. He says that religion’s main function is to aid group cohesion, and that religion has been key to allowing humans to sustain larger groups. While I have doubts about how well he explains these details, his book gives me an excuse to return to this important topic.

To review his review, many animals travel in groups to protect against outside threats, via treating other group members either generically or via simple status ladders. But primates formed large groups via treating every other member differently, especially via hugs and grooming. Primates thus needed big brains to manage the politics of such groups.

Our especially big brains have allowed human groups to get even larger before fragmenting due to internal conflicts. To further support the social cohesion that can sustain larger groups, we evolved smiles, laughter, singing, dancing, communal eating, drugs, sacrifices, humor, emotional story telling, more rituals, and moralizing gods. And we felt closer to associates with whom we shared language, place of origin, child to adult path, hobbies, worldview, musical taste, and sense of humor.

We also evolved trance/mystic experiences, often with romance-like feelings, and often enhanced by drugs. And we evolved supernatural beliefs, with which we made sense of the world, felt power over it, and could accuse associates of witchcraft. Religion is built especially on these two foundations.

All of this makes sense to me. The puzzle I see is that we’ve seen big changes over time, in which the relative importance of these factors has changed. How can we explain such changes?

Language seems to have arisen about 500Kya. Our earliest spirituality apparently included altered mental states such as trances, and animism, wherein most everything around us had a spirit. Roughly 100Kya our ancestors started to put valuable goods into their graves, suggesting beliefs in an afterlife. Such beliefs seem to go together with ancestor worship, and with shamans who specialize in religion.

Starting roughly 10Kya, with the farming revolution, humans started to live more densely, and built special religious spaces. They also found more potent drugs, such as poppy seeds and beer. We then turned to ritual (often human) sacrifice to capricious gods. In larger communities, we soon after saw social stratification, including a separate class of priests, especially when food storage was possible. In this kind of religion, rituals were a communal duty, to placate the gods, and individual beliefs were unimportant.

Then starting roughly 4Kya, near the “Axial Age”, we saw the rise of a new kind of religion associated with farming and herding in even larger communities, at the latitudes where such larger communities were possible. Most “traditional” religions of today arose during this ancient era. This type of religion was centered on individual beliefs in moralizing gods described in writings that told of stories and doctrines. Religion became a personal duty, often resulting from a personal choice, and love and forgiveness came to matter more, relative to the sheer power of gods.

Finally, we have recently seen a great and somewhat puzzling decline in religion, apparently in association with rising wealth, even though we still have great needs for group cohesion.

To explain these changes, it helps little to point to the timeless advantages of these many strategies. For example, both gods who punish moral violations and also capricious gods who demand ritual sacrifices seem useful for promoting social cohesion. So why did they arise at the particular times they did? Dunbar doesn’t say much on this.

It seems to me that we must focus on using changes to explain changes. For example, perhaps communal responsibility for ritual sacrifice became much more socially potent when aided by drugs in the new religious spaces built for new larger denser communities. And perhaps personal responsibility and beliefs toward moralizing loving gods became much more socially potent when aided by priests who consulted written stories and doctrines.

If “woke” is a new “religion,” then it seems a complement to drugs, it lacks sacred texts, and it often sacrifices humans to pay for a collective guilt, in front of big crowds in the special big public spaces of social media. And it seems to create a new class of priests, and perhaps also a new stratification of the population. That sure sounds a lot like the religious style of the first half of the farming era; is that style returning now?


Foom Update

To extend our reach, we humans have built tools, machines, firms, and nations. And as these are powerful, we try to maintain control of them. But as efforts to control them usually depend on their details, we have usually waited to think about how to control them until we had concrete examples in front of us. In the year 1000, for example, there wasn’t much we could do to usefully think about how to control most things that have only appeared in the last two centuries, such as cars or international courts.

Someday we will have far more powerful computer tools, including “advanced artificial general intelligence” (AAGI), i.e., with capabilities even higher and broader than those of individual human brains today. And some people today spend substantial efforts worrying about how we will control these future tools. Their most common argument for this unusual strategy is “foom”.

That is, they postulate a single future computer system, initially quite weak and fully controlled by its human sponsors, but capable of action in the world and with general values to drive such action. Then over a short time (days to weeks) this system dramatically improves (i.e., “fooms”) to become an AAGI far more capable even than the sum total of all then-current humans and computer systems. This happens via a process of self-reflection and self-modification, and this self-modification also produces large and unpredictable changes to its effective values. They seek to delay this event until they can find a way to prevent such dangerous “value drift”, and to persuade those who might initiate such an event to use that method.

I’ve argued at length (1 2 3 4 5 6 7) against the plausibility of this scenario. It’s not that it’s impossible, or that no one should work on it, but that far too many take it as a default future scenario. But I haven’t written on it for many years now, so perhaps it is time for an update. Recently we have seen noteworthy progress in AI system demos (if not yet commercial application), and some have urged me to update my views as a result.

The recent systems have used relatively simple architectures and basic algorithms to produce models with enormous numbers of parameters from very large datasets. Compared to prior systems, these systems have produced impressive performance on an impressively wide range of tasks, even though they are still quite far from displacing humans in any substantial fraction of their current tasks.

For the purpose of reconsidering foom, however, the key things to notice are: (1) these systems have kept their values quite simple and very separate from the rest of the system, and (2) they have done basically zero self-reflection or self-improvement. As I see AAGI as still a long way off, the features of these recent systems can only offer weak evidence regarding the features of AAGI.

Even so, recent developments offer little support for the hypothesis that AAGI will be created soon via the process of self-reflection and self-improvement, or for the hypothesis that such a process risks large “value drifts”. These current ways that we are now moving toward AAGI just don’t look much like the foom scenario. And I don’t see them as saying much about whether ems or AAGI will appear first.

Again, I’m not saying foom is impossible, just that it looks unlikely, and that recent events haven’t made it seem more so.

These new systems do suggest a substantial influence of architecture on system performance, though not obviously at a level out of line with most prior AI systems. And note that the abilities of the very best systems here are not that much better than those of the 2nd and 3rd best systems, arguing weakly against AAGI scenarios where the best system is vastly better.


What Do We Owe The World?

We each owe some degree of consideration to our close associates, and to larger groups with which we associate. But what do we owe the larger world and universe?

Some of us think we should each put in some effort to improve that universe. Not just that this would be nice, but that we are morally obligated to do so. But how should we think about this obligation?

We could try to collect a long list of specific things different people should do for the world, and how those things vary with context. But is there a simpler more general way to describe these obligations?

We can think of ourselves as made up of many smaller selves, one at each moment in time and in each possible world. Standard expected utility theory then says that we maximize a weighted sum across these many sub-selves. So a natural way to include the rest of the universe is to expand this weighted sum to include every other creature, and their many component selves.
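One way to write out this expanded weighted sum (notation mine; a sketch of the idea rather than any formalism from the post):

```latex
% Choose actions a to maximize a weighted sum over creatures j,
% times t, and possible worlds \omega (with probabilities p):
\max_a \;\; \sum_{j \in \text{creatures}} w_j \sum_{t} \sum_{\omega}
    p(\omega)\, u_j\!\big(x_{j,t,\omega}(a)\big)

% Standard (self-regarding) expected utility is the special case
% w_{\text{self}} = 1 and w_j = 0 for all j \neq \text{self};
% including the rest of the universe means giving other creatures
% positive weights w_j.
```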

The relative weight we put on others might vary with their distance in spacetime, and with their similarity to us. But a general problem with this approach is that in many scenarios we will want to do either near nothing or near everything. If we consider some large group of others, then as we increase the weight we put on its members, at first we will want to do very little for them, and then as the weight passes a key threshold we suddenly switch to wanting to put nearly all of our efforts into helping them. The weight must be finely tuned to induce intermediate efforts.
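This threshold effect can be seen in a toy model. As a hedged illustration (the functional forms, budget, and weights below are my own assumptions, not from the post): suppose our own utility of consumption is logarithmic (diminishing returns), while each unit given to a very large group of others yields a roughly constant benefit, weighted by w. Then the optimal giving fraction jumps from ~0% to ~90% of the budget as w moves over just one order of magnitude:

```python
import math

# Toy model of the threshold effect (illustrative assumptions only):
# - own utility of consumption is log(x), so it has diminishing returns
# - the group of others is large, so each unit given yields a roughly
#   constant marginal benefit, normalized to 1 per unit
# - we split a fixed budget, maximizing log(own) + w * amount_given,
#   where w is the relative weight we put on the group

BUDGET = 100.0

def optimal_giving_fraction(w, steps=10000):
    """Grid-search the budget split that maximizes total utility."""
    best_frac, best_u = 0.0, -math.inf
    for i in range(1, steps):
        frac = i / steps                     # fraction of budget given away
        own = BUDGET * (1 - frac)            # what we keep for ourselves
        u = math.log(own) + w * BUDGET * frac
        if u > best_u:
            best_frac, best_u = frac, u
    return best_frac

for w in [0.005, 0.009, 0.011, 0.02, 0.1]:
    print(f"w = {w:5.3f} -> give away {optimal_giving_fraction(w):.0%}")
```

Below w ≈ 0.01 the optimum is to give essentially nothing; by w = 0.1 it is to give away 90% of the budget. Weights plausibly vary across many orders of magnitude, so hitting the narrow window that yields intermediate giving requires fine tuning.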

If intermediate levels of help sound more reasonable, one way to get them is to talk in terms of a budget: we might each be obligated to spend at least some fixed fraction of our resources, such as money, time, and reputation, helping the world. The simplest version would require the same fraction of everyone, though more complex versions could let it vary with context.

Bryan Caplan’s new book is titled How Evil Are Politicians?, based on this essay wherein he seems to embrace something like a budget obligation story, except with politicians having much larger budget obligations:

If you’re in a position to pass or enforce laws, lives and freedom are in your hands. Common decency requires … politicians to make … intellectual hygiene their top priority. Until they calmly recuse themselves from their society and energetically weigh a wide range of moral arguments, they have no business lifting a political finger. At this point, the iniquity of practicing politicians should be clear. How much time and mental energy does the average politician pour into moral due diligence? A few hours a year seems like a high estimate. They don’t just fall a tad short of their moral obligations. They’re too busy passing laws and giving orders to face the possibility that they’re wielding power illegitimately.

To check on all this, I did a series of Twitter polls asking what fraction of their resources different kinds of people are obligated to spend trying to help the world. Here are the resulting (median of lognormal-fit) % estimates:

The basic %-of-budget moral framing seems confirmed by the many who answered these questions and the few who complained about the framing. Furthermore, respondents do seem to think this budget varies with type of person, and agree with Caplan that politicians have much higher obligations.

However, respondents had enormously divergent opinions about what that obligation budget % should be (the median standard deviation is a factor of ~18), and even the middle estimates in the chart above seem to me to vary far too much across types of people. It seems unfair to demand far more effort from others than you are willing to make yourself. And it seems disrespectful to demand far less from other kinds of people, as if you don’t see them as sufficiently human to hold them to moral standards.

This looks to me more like a status story, wherein we try to hold higher status people to higher moral standards, as a sort of “progressive taxation” of status. And while progressive taxation might make sense for governments, having moral obligations vary this strongly with status just doesn’t fit my moral intuitions. We should all try to help others, at least to some similarly modest degree.

Added 10a: The prior numbers in the table were wrong due to a math mistake, now fixed.
